Sample records for additional computing power

  1. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder bed additive manufacturing of IN 718. Objectives: optimize material build parameters with reduced time and cost through modeling; increase understanding of build properties; increase reliability of builds; decrease time to adoption of the process for critical hardware; potentially decrease post-build heat treatments. Approach: conduct single-track and coupon builds at various build parameters; record build parameter information and QM Meltpool data; refine the Applied Optimization powder bed AM process model using the data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; validate the modeling with an additional build. Findings: photodiode intensity measurements are highly linear with power input; melt pool intensity is highly correlated with melt pool size; melt pool size and intensity increase with power. Applied Optimization will use the data to develop a powder bed additive manufacturing process model.

  2. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
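
    The low-/high-precision combination described in this record follows the classic pattern of mixed-precision iterative refinement: solve cheaply in low precision, then correct against a residual computed in high precision. The sketch below illustrates that general technique in numpy; it is not the authors' code, and the demo matrix, tolerance, and re-factorization inside the loop are illustrative choices.

      import numpy as np

      def mixed_precision_solve(A, b, tol=1e-12, max_iter=50):
          # Solve in float32 (cheaper, lower energy per FLOP), but accumulate
          # the solution and measure the residual in float64.
          A32 = A.astype(np.float32)
          x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
          for _ in range(max_iter):
              r = b - A @ x  # high-precision residual
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  break
              # A real implementation would reuse one low-precision factorization.
              d = np.linalg.solve(A32, r.astype(np.float32))
              x += d.astype(np.float64)  # low-precision correction step
          return x

      A = np.random.rand(100, 100) + 100 * np.eye(100)  # well-conditioned demo
      b = np.random.rand(100)
      print(np.linalg.norm(A @ mixed_precision_solve(A, b) - b))  # small residual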

  3. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  4. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.

  5. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  6. Balancing computation and communication power in power constrained clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
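
    The gating rule in this record reduces to a threshold test on a predicted idle interval, with the freed watts returned to a shared budget. Below is a toy sketch under that reading; the Node class, timings, and power figures are invented for illustration and are not the patent's interfaces.

      import random

      class Node:
          """Toy compute node; the predicted wait is random for demo purposes."""
          def __init__(self, name):
              self.name = name
              self.gated = False

          def predict_wait(self):
              return random.uniform(0.0, 2.0)  # predicted idle time, seconds

      def rebalance(nodes, threshold_s=0.5, node_power_w=200.0):
          freed_w = 0.0
          for n in nodes:
              if n.predict_wait() > threshold_s:  # long wait: sleeping pays off
                  n.gated = True
                  freed_w += node_power_w         # watts returned to the budget
          active = sum(not n.gated for n in nodes)
          return freed_w / max(1, active)         # extra budget per active node

      nodes = [Node(f"n{i}") for i in range(8)]
      print(f"extra power per active node: {rebalance(nodes):.1f} W")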

  7. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  8. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  9. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…

  10. Software Support for Transiently Powered Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Woude, Joel Matthew

    With the continued reduction in the size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.
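
    Ratchet itself works at the compiler level; a runtime analogue of the underlying idea is to persist progress to non-volatile storage so that re-execution after a power failure is bounded. A hypothetical sketch, with an atomically replaced file standing in for non-volatile memory:

      import json, os

      STATE = "nvm_checkpoint.json"  # file stands in for non-volatile memory

      def checkpoint(state):
          tmp = STATE + ".tmp"       # write-then-rename: the old checkpoint
          with open(tmp, "w") as f:  # stays valid if power fails mid-write
              json.dump(state, f)
          os.replace(tmp, STATE)     # atomic rename, no torn checkpoints

      def restore():
          if os.path.exists(STATE):
              with open(STATE) as f:
                  return json.load(f)
          return {"i": 0, "acc": 0}  # first boot: start from scratch

      state = restore()              # after a power cycle, resume here
      while state["i"] < 1_000_000:  # the "long-running computation"
          state["acc"] += state["i"]
          state["i"] += 1
          if state["i"] % 10_000 == 0:
              checkpoint(state)      # bounds re-execution after power loss
      print(state["acc"])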

  11. Computational Power of Symmetry-Protected Topological Phases.

    PubMed

    Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert

    2017-07-07

    We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.

  12. Computational Power of Symmetry-Protected Topological Phases

    NASA Astrophysics Data System (ADS)

    Stephen, David T.; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert

    2017-07-01

    We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.

  13. Unity Power Factor Operated PFC Converter Based Power Supply for Computers

    NASA Astrophysics Data System (ADS)

    Singh, Shikha; Singh, Bhim; Bhuvaneswari, G.; Bist, Vashist

    2017-11-01

    Power Supplies (PSs) employed in personal computers pollute the single-phase ac mains by drawing distorted current at a substandard Power Factor (PF). The harmonic distortion of the supply current in these personal computers is observed to be 75% to 90%, with a very high Crest Factor (CF), which escalates losses in the distribution system. To find a tangible solution to these issues, a non-isolated PFC converter is employed at the input of the isolated converter; it improves the input power quality while regulating the dc voltage at its output. This output feeds the isolated stage, which yields completely isolated and tightly regulated multiple output voltages, the prime requirement of a computer PS. The operation of the proposed PS is evaluated under various operating conditions, and the results show improved performance with nearly unity PF and low input current harmonics. A prototype of this PS was developed in a laboratory environment, and the recorded test results corroborate the power quality improvement observed in the simulation results under various operating conditions.

  14. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  15. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
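
    The claimed profiling step amounts to composing a per-operation hardware power table with the application's operation mix. A toy sketch of that composition; the operation types and wattages are illustrative assumptions, not values from the patent.

      # Hardware power consumption profile: watts drawn during each kind of
      # processing operation (illustrative numbers).
      HW_PROFILE_W = {"compute": 95.0, "memory": 35.0, "network": 20.0}

      def application_power_profile(op_seconds):
          """op_seconds maps operation type -> seconds the application spends there."""
          energy_j = {op: HW_PROFILE_W[op] * t for op, t in op_seconds.items()}
          energy_j["total"] = sum(energy_j.values())
          return energy_j  # joules per operation class, plus the total

      # e.g. a run that computes for 120 s, waits on memory 30 s, communicates 15 s
      print(application_power_profile({"compute": 120, "memory": 30, "network": 15}))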

  16. Maximizing Computational Capability with Minimal Power

    DTIC Science & Technology

    2009-03-01

    [The indexed abstract is OCR residue from the report's documentation page and briefing slides ("Chip-Scale Energy and Power... and Heat"; optical-bench and imager-chip figure labels). The only recoverable technical content is the quoted CMOS dynamic switching power estimate, P = 1/2 C Vdd^2 f, which excludes off-chip communication such as memory access, with a quoted chip-to-chip load of at least 10 pF.]
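
    The one recoverable formula is the standard CMOS dynamic switching power estimate. A worked example with assumed component values:

      # CMOS dynamic switching power, P = 1/2 * C * Vdd^2 * f, as quoted in the
      # fragment above; it excludes off-chip communication such as memory access.
      # The values below are illustrative assumptions.
      C = 10e-12   # switched capacitance: 10 pF (the quoted chip-to-chip load)
      Vdd = 1.0    # supply voltage, volts
      f = 1e9      # switching frequency, hertz
      P = 0.5 * C * Vdd**2 * f
      print(f"P = {P * 1e3:.1f} mW")  # -> 5.0 mW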

  17. Computed lateral rate and acceleration power spectral response of conventional and STOL airplanes to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1975-01-01

    Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.

  18. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information. Subject...

  19. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information. Subject...

  20. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information. Subject...

  21. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information. Subject...

  22. 50 CFR 453.06 - Additional Committee powers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS ENDANGERED SPECIES EXEMPTION PROCESS ENDANGERED SPECIES COMMITTEE § 453.06 Additional Committee powers. (a) Secure information. Subject...

  23. Power subsystem performance prediction (PSPP) computer program.

    NASA Technical Reports Server (NTRS)

    Weiner, H.; Weinstein, S.

    1972-01-01

    A computer program which simulates the operation of the Viking Orbiter Power Subsystem has been developed. The program simulates the characteristics and interactions of a solar array, battery, battery charge controls, zener diodes, power conditioning equipment, and the battery-spacecraft and zener diode-spacecraft thermal interfaces. This program has been used to examine the operation of the Orbiter power subsystem during critical phases of the Viking mission - from launch, through midcourse maneuvers, Mars orbital insertion, orbital trims, Lander separation, solar occultations and unattended operation - until the end of the mission. A typical computer run for the first 24 hours after launch is presented which shows the variations in solar array, zener diode, battery charger, batteries and user load characteristics during this period.

  24. Computer program analyzes and monitors electrical power systems (POSIMO)

    NASA Technical Reports Server (NTRS)

    Jaeger, K.

    1972-01-01

    Requirements for monitoring and/or simulating electric power distribution, power balance, and charge budget are discussed. A computer program to analyze the power system and generate a set of characteristic power system data is described. Application of status indicators to denote different exclusive conditions is presented.

  25. Energy Use and Power Levels in New Monitors and Personal Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of its contribution to overall unit energy consumption.

  26. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  27. Turning a $10 Computer into a Powerful DIY Data Logger

    NASA Astrophysics Data System (ADS)

    Schilperoort, B.

    2017-12-01

    Due to the rapid advance of consumer electronics, much more powerful and cheaper options are available for DIY projects. The $10 `Raspberry Pi Zero W' computer, with features like WiFi, Bluetooth, HDMI video output, and large, cheap memory, can be used for data logging purposes. The computer has a range of input and output pins on the board, with which virtually every type of digital sensor communication is possible. With an extra component, analog measurements can also be made. A further option is the addition of a camera, which can be connected straight to the board. However, due to its relatively high power consumption (0.5-0.7 W), the `Zero W' is not optimal for off-the-grid locations. For ease of use, the collected data can be downloaded over a local WiFi network using a smartphone or laptop. No extra software or skills are needed; it is as simple as visiting a webpage and pressing download, making data collection a quick and easy task. With simple step-by-step instructions you can set up your own data logger to collect data from sensors ranging from simple temperature and water level measurements to sonic anemometers.

  28. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  29. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  30. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
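
    Read as pseudocode, the claim is: run prioritized applications at full power until an energy budget threshold trips, then apply a conservation action. A toy sketch under that reading, with throttling as the single conservation action; all numbers are illustrative, not the patent's.

      def run(apps, budget_j, full_w=100.0, throttled_w=60.0, dt=1.0):
          apps = sorted(apps, key=lambda a: a["priority"])  # execution priority
          consumed_j, power_w = 0.0, full_w
          for app in apps:
              remaining_s = app["seconds"]
              while remaining_s > 0:
                  consumed_j += power_w * dt
                  remaining_s -= dt * (power_w / full_w)  # throttling slows work
                  if consumed_j >= budget_j:
                      power_w = throttled_w               # conservation action
          return consumed_j

      jobs = [{"priority": 0, "seconds": 30}, {"priority": 1, "seconds": 60}]
      print(f"energy used: {run(jobs, budget_j=5_000):.0f} J")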

  31. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  32. Computer memory power control for the Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterruptible power supply designs.

  33. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN]; Blocksome, Michael A [Rochester, MN]; Peters, Amanda E [Rochester, MN]; Ratterman, Joseph D [Rochester, MN]; Smith, Brian E [Rochester, MN]

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.

  34. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  35. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  36. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  37. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  38. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  39. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  40. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  41. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2017-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws and geometric features were inspected using a 2-megavolt linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  42. Computer optimization of reactor-thermoelectric space power systems

    NASA Technical Reports Server (NTRS)

    Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.

    1973-01-01

    A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.

  43. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  44. Future computing platforms for science in a power constrained era

    DOE PAGES

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; ...

    2015-12-23

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market, including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  45. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN]; Blocksome, Michael A [Rochester, MN]; Peters, Amanda A [Rochester, MN]; Ratterman, Joseph D [Rochester, MN]; Smith, Brian E [Rochester, MN]

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
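
    The pattern maps naturally onto a barrier: each node sheds power on entering the blocking operation and restores it once every node has arrived. A sketch assuming mpi4py for the blocking operation; the power knob is a stub, since the patent's hardware interface is not given here.

      from mpi4py import MPI  # assumes mpi4py is installed

      def set_component_power(level):
          # Stand-in for a platform-specific DVFS or power-gating call.
          print(f"power -> {level}")

      comm = MPI.COMM_WORLD
      # Nodes reach the blocking operation asynchronously, so each one can
      # drop power as soon as it starts waiting on the others.
      set_component_power("reduced")
      comm.Barrier()               # the blocking operation from the claim
      set_component_power("full")  # all nodes arrived: restore power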

  46. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN]; Blocksome, Michael A [Rochester, MN]; Peters, Amanda E [Cambridge, MA]; Ratterman, Joseph D [Rochester, MN]; Smith, Brian E [Rochester, MN]

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  47. 3-D Electromagnetic field analysis of wireless power transfer system using K computer

    NASA Astrophysics Data System (ADS)

    Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi

    2018-05-01

    We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. It is clarified that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.

  48. Computing the Power-Density Spectrum for an Engineering Model

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1982-01-01

    Computer program for calculating the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL) uses an algorithm that employs the fast Fourier transform (FFT) to calculate the PDS of a variable. This is accomplished by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function to obtain the PDS. The FFT technique conserves computer resources.
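
    The described pipeline is a Blackman-Tukey-style estimate: compute the autocovariance, smooth it with a lag window, then take its FFT. A small numpy sketch of that general method; the window choice, lag cutoff, and scaling here are illustrative, not the program's.

      import numpy as np

      def psd_via_autocovariance(x, dt=1.0, max_lag=None):
          x = np.asarray(x, float) - np.mean(x)
          n = len(x)
          max_lag = max_lag or n // 4
          # Biased autocovariance estimate for lags 0 .. max_lag-1.
          acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])
          acov *= np.hanning(2 * max_lag)[max_lag:]    # lag-window smoothing
          sym = np.concatenate([acov, acov[-1:0:-1]])  # even extension
          psd = np.abs(np.fft.rfft(sym)) * dt          # one-sided estimate
          return np.fft.rfftfreq(len(sym), dt), psd

      t = np.arange(0, 100, 0.1)
      x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.randn(len(t))
      f, p = psd_via_autocovariance(x, dt=0.1)
      print(f[np.argmax(p)])  # peak should sit near the 0.5 Hz line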

  49. Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing

    NASA Astrophysics Data System (ADS)

    Kim, Mooseop; Ryou, Jaecheol

    The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architectures and design methods for a low-power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and draws about 1.1 mA on a 0.25 μm CMOS process.

  50. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution]

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  51. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN]; Blocksome, Michael A [Rochester, MN]; Peters, Amanda E [Rochester, MN]; Ratterman, Joseph D [Rochester, MN]; Smith, Brian E [Rochester, MN]

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.

  52. People Power--Computer Games in the Classroom

    ERIC Educational Resources Information Center

    Hilliard, Ivan

    2014-01-01

    This article presents a case study in the use of the computer simulation game "People Power," developed by the International Center on Nonviolent Conflict. The principal objective of the activity was to offer students an opportunity to understand the dynamics of social conflicts, in a format not possible in a traditional classroom…

  53. Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid

    2017-03-01

    The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
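
    The offloading decision can be framed as: among the partitionings that meet the accuracy target, pick the one with the lowest energy cost. A toy sketch of that framing; the accuracy and per-sample energy figures are invented, not the paper's measurements.

      # Candidate partitionings: classify on the wearable, or ship features to
      # the phone. Costs and accuracies below are illustrative assumptions.
      def choose_partition(n_samples, target_acc):
          local = {"name": "on-device", "acc": 0.88, "mj_per_sample": 0.9}
          remote = {"name": "offload", "acc": 0.95, "mj_per_sample": 0.4}
          feasible = [m for m in (local, remote) if m["acc"] >= target_acc]
          best = min(feasible, key=lambda m: m["mj_per_sample"])
          return best["name"], best["mj_per_sample"] * n_samples  # choice, mJ

      print(choose_partition(10_000, target_acc=0.90))  # -> ('offload', 4000.0)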

  54. Modeling and Analysis of Power Processing Systems. [use of a digital computer for designing power plants]

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.

    1974-01-01

    The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.

  55. Cloud Computing and the Power to Choose

    ERIC Educational Resources Information Center

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  56. Computer-Aided Engineering Tools | Water Power | NREL

    Science.gov Websites

    Describes NREL's computer-aided engineering tools for water power, including simulation tools for energy converters that will provide a full range of simulation capabilities for single devices and arrays. Simulation of water power technologies on high-performance computers enables the study of complex systems and experimentation. Such simulation is critical to accelerating progress in energy programs within the U.S. Department of Energy.

  57. Harmonic analysis of spacecraft power systems using a personal computer

    NASA Technical Reports Server (NTRS)

    Williamson, Frank; Sheble, Gerald B.

    1989-01-01

    The effects that nonlinear devices such as ac/dc converters, HVDC transmission links, and motor drives have on spacecraft power systems are discussed. The nonsinusoidal currents, along with the corresponding voltages, are calculated by a harmonic power flow which decouples and solves for each harmonic component individually using an iterative Newton-Raphson algorithm. The sparsity of the harmonic equations and the overall Jacobian matrix is used to an advantage in terms of saving computer memory space and in terms of reducing computation time. The algorithm could also be modified to analyze each harmonic separately instead of all at the same time.

  58. Advanced Computational Techniques for Power Tube Design.

    DTIC Science & Technology

    1986-07-01

    fixturing applications, in addition to the existing computer-aided engineering capabilities. Helix TWT Manufacturing has implemented a tooling and fixturing... illustrates the major features of this computer network. The backbone of our system is a Sytek Broadband Network (LAN) which interconnects terminals and... automatic network analyzer (FANA) which electrically characterizes the slow-wave helices of traveling-wave tubes (TWTs), both for engineering design

  59. Saving Energy and Money: A Lesson in Computer Power Management

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
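
    The lesson's core arithmetic is: watts avoided, times idle hours, times school days, converted to kWh and dollars. A short example with assumed classroom figures:

      # All figures are illustrative assumptions for a single classroom.
      monitors = 30                 # machines in the room
      awake_w, sleep_w = 35.0, 1.0  # display power awake vs. asleep
      idle_h_per_day = 4            # hours the screens would otherwise idle
      days = 180                    # school days per year
      rate = 0.12                   # dollars per kWh

      kwh = monitors * (awake_w - sleep_w) * idle_h_per_day * days / 1000
      print(f"{kwh:.0f} kWh saved, ${kwh * rate:.2f} per year")  # ~734 kWh, ~$88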

  60. Lithium Dinitramide as an Additive in Lithium Power Cells

    NASA Technical Reports Server (NTRS)

    Gorkovenko, Alexander A.

    2007-01-01

    Lithium dinitramide, LiN(NO2)2, has shown promise as an additive to nonaqueous electrolytes in rechargeable and non-rechargeable lithium-ion-based electrochemical power cells. Such non-aqueous electrolytes consist of lithium salts dissolved in mixtures of organic ethers, esters, carbonates, or acetals. The benefits of adding lithium dinitramide (which is also a lithium salt) include lower irreversible loss of capacity on the first charge/discharge cycle, higher cycle life, lower self-discharge, greater flexibility in selection of electrolyte solvents, and greater charge capacity. The need for a suitable electrolyte additive arises as follows: The metallic lithium in the anode of a lithium-ion-based power cell is so highly reactive that in addition to the desired main electrochemical reaction, it engages in side reactions that cause formation of resistive films and dendrites, which degrade performance as quantified in terms of charge capacity, cycle life, shelf life, first-cycle irreversible capacity loss, specific power, and specific energy. The incidence of side reactions can be reduced through the formation of a solid-electrolyte interface (SEI), a thin film that prevents direct contact between the lithium anode material and the electrolyte. Ideally, an SEI should chemically protect the anode and the electrolyte from each other while exhibiting high conductivity for lithium ions and little or no conductivity for electrons. A suitable additive can act as an SEI promoter. Heretofore, most SEI promotion was thought to derive from organic molecules in electrolyte solutions. In contrast, lithium dinitramide is inorganic. Dinitramide compounds are known as oxidizers in rocket-fuel chemistry and, until now, were not known as SEI promoters in battery chemistry. Although the exact reason for the improvement afforded by the addition of lithium dinitramide is not clear, it has been hypothesized that lithium dinitramide competes with other electrolyte constituents to react with

  61. NCRETURN Computer Program for Evaluating Investments Revised to Provide Additional Information

    Treesearch

    Allen L. Lundgren; Dennis L. Schweitzer

    1971-01-01

    Reports a modified version of NCRETURN, a computer program for evaluating forestry investments. The revised version, RETURN, provides additional information about each investment, including future net worths and benefit-cost ratios, with no added input.

  62. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    PubMed

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduced an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. Ranked from least to most influenced by external conditions, the generated power output order is: multi-, mono-, and amorphous-crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous-crystalline cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden-layer units.
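
    A minimal stand-in for this kind of model is a small MLP regressor mapping environmental inputs to output power. The sketch below uses scikit-learn on synthetic data; the paper's inputs, data set, and network sizes differ, and the toy PV formula is only for generating demo targets.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      irradiance = rng.uniform(100, 1000, 500)           # W/m^2
      temp = rng.uniform(0, 45, 500)                     # deg C
      X = np.column_stack([irradiance, temp])
      y = 0.15 * irradiance * (1 - 0.004 * (temp - 25))  # toy PV output, watts

      # Four hidden units, echoing the finding that 3-4 suffice for some cell
      # types; the training details here are illustrative.
      model = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
      model.fit(X, y)
      print(model.predict([[800.0, 30.0]]))              # predicted power, W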

  63. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design

    PubMed Central

    Vanommeslaeghe, K.

    2014-01-01

    Background: Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. Scope of Review: As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields’ parametrization philosophy and methodology. Major Conclusions: Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1 microsecond on proteins, DNA, lipids and carbohydrates. General Significance: Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach, while the availability of the Drude polarizable force fields offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. PMID:25149274

  64. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  65. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Felicia Angelica; Waymire, Russell L.

    2013-10-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations (U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency) related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  66. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.

  67. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.

    1979-01-01

    The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.

  68. DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS (REAR), ROOM 8A - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  69. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design.

    PubMed

    Vanommeslaeghe, K; MacKerell, A D

    2015-05-01

    Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields' parametrization philosophy and methodology. Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1μs on proteins, DNA, lipids and carbohydrates. Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach while the availability of the Drude polarizable force fields offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. This article is part of a Special Issue entitled "Recent developments of molecular dynamics". Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Throneburg, E. B.; Jones, J. M.

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  11. Thermal and Power Challenges in High Performance Computing Systems

    NASA Astrophysics Data System (ADS)

    Natarajan, Venkat; Deshpande, Anand; Solanki, Sudarshan; Chandrasekhar, Arun

    2009-05-01

    This paper provides an overview of the thermal and power challenges in emerging high performance computing platforms. The advent of new sophisticated applications in highly diverse areas such as health, education, finance, and entertainment is driving the platform and device requirements for future systems. The key ingredients of future platforms are vertically integrated (3D) die-stacked devices, which provide the required performance characteristics with the associated form-factor advantages. Two of the major challenges to the design of through silicon via (TSV) based 3D stacked technologies are (i) effective thermal management and (ii) efficient power delivery mechanisms. Some of the key challenges articulated in this paper include hot-spot superposition and intensification in a 3D stack, design/optimization of thermal through silicon vias (TTSVs), non-uniform power loading of multi-die stacks, efficient on-chip power delivery, minimization of electrical hotspots, etc.

  12. Additively Manufactured IN718 Components with Wirelessly Powered and Interrogated Embedded Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attridge, Paul; Bajekal, Sanjay; Klecka, Michael

    A methodology is described for embedding commercial-off-the-shelf sensors together with wireless communication and power circuit elements in direct metal laser sintered additively manufactured components. Physics-based models of the additive manufacturing processes and sensor/wireless-level performance models guided the design and embedment processes. A combination of cold spray deposition and laser engineered net shaping was used to fashion the transmitter/receiving elements and embed the sensors, thereby providing environmental protection and component robustness/survivability for harsh conditions. By design, this complement of analog and digital sensors was wirelessly powered and interrogated using a health and utilization monitoring system, enabling real-time, in situ prognostics and diagnostics.

  13. Computational Fluid Dynamics and Additive Manufacturing to Diagnose and Treat Cardiovascular Disease.

    PubMed

    Randles, Amanda; Frakes, David H; Leopold, Jane A

    2017-11-01

    Noninvasive engineering models are now being used for diagnosing and planning the treatment of cardiovascular disease. Techniques in computational modeling and additive manufacturing have matured concurrently, and results from simulations can inform and enable the design and optimization of therapeutic devices and treatment strategies. The emerging synergy between large-scale simulations and 3D printing is having a two-fold benefit: first, 3D printing can be used to validate the complex simulations, and second, the flow models can be used to improve treatment planning for cardiovascular disease. In this review, we summarize and discuss recent methods and findings for leveraging advances in both additive manufacturing and patient-specific computational modeling, with an emphasis on new directions in these fields and remaining open questions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Using a Computer Game to Reinforce Skills in Addition Basic Facts in Second Grade.

    ERIC Educational Resources Information Center

    Kraus, William H.

    1981-01-01

    A computer-generated game called Fish Chase was developed to present drill-and-practice exercises on addition facts. The subjects of the study were 19 second-grade pupils. The results indicate a computer game can be used effectively to increase proficiency with basic facts. (MP)

  15. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  16. Computer program for afterheat temperature distribution for mobile nuclear power plant

    NASA Technical Reports Server (NTRS)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.

  17. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
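
    The record above contrasts MPI and OpenMP parallelizations of the simulation kernel. As a hedged, minimal sketch of the MPI flavor (not the paper's code; the simplified swing equation, the machine parameters, and the data distribution below are illustrative assumptions), each rank can own a subset of generators and exchange states once per time step:

    ```python
    # Hedged sketch (not the paper's implementation): distributing per-generator
    # dynamics across MPI ranks with mpi4py. The simplified swing equation, the
    # generator parameters, and the 3-machine case are illustrative assumptions.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Illustrative machine data: inertia H (s), damping D (pu), mechanical power Pm (pu).
    gens = [dict(H=3.5, D=1.0, Pm=0.9),
            dict(H=4.0, D=1.2, Pm=0.8),
            dict(H=5.0, D=0.8, Pm=1.0)]
    mine = gens[rank::size]                              # cyclic ownership of generators
    state = [dict(delta=0.1, omega=0.0) for _ in mine]   # rotor angle (rad), speed dev (pu)

    dt, f0, p_max = 0.001, 60.0, 1.2
    for _ in range(1000):
        for g, s in zip(mine, state):
            pe = p_max * np.sin(s["delta"])              # classical-model electrical power
            # Swing equation: (2H/ws) * d(omega)/dt = Pm - Pe - D*omega
            s["omega"] += dt * (np.pi * f0 / g["H"]) * (g["Pm"] - pe - g["D"] * s["omega"])
            s["delta"] += dt * 2.0 * np.pi * f0 * s["omega"]
        # A real network solution would exchange bus quantities every step.
        all_states = comm.allgather(state)

    if rank == 0:
        print("local rotor angles (rad):", [round(s["delta"], 3) for s in state])
    ```

    Run with, e.g., `mpiexec -n 4 python sim.py`; an OpenMP variant would instead thread the inner generator loop within one shared-memory process.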

  18. Computations of Combustion-Powered Actuation for Dynamic Stall Suppression

    NASA Technical Reports Server (NTRS)

    Jee, Solkeun; Bowles, Patrick O.; Matalanis, Claude G.; Min, Byung-Young; Wake, Brian E.; Crittenden, Tom; Glezer, Ari

    2016-01-01

    A computational framework for the simulation of dynamic stall suppression with combustion-powered actuation (COMPACT) is validated against wind tunnel experimental results on a VR-12 airfoil. COMPACT slots are located at 10% chord from the leading edge of the airfoil and directed tangentially along the suction-side surface. Helicopter rotor-relevant flow conditions are used in the study. A computationally efficient two-dimensional approach, based on unsteady Reynolds-averaged Navier-Stokes (RANS), is compared in detail against the baseline and the modified airfoil with COMPACT, using aerodynamic forces, pressure profiles, and flow-field data. The two-dimensional RANS approach predicts baseline static and dynamic stall very well. Most of the differences between the computational and experimental results are within two standard deviations of the experimental data. The current framework demonstrates an ability to predict COMPACT efficacy across the experimental dataset. Enhanced aerodynamic lift on the downstroke of the pitching cycle due to COMPACT is well predicted, and the cycle-averaged lift enhancement computed is within 3% of the test data. Differences with experimental data are discussed with a focus on three-dimensional features not included in the simulations and the limited computational model for COMPACT.

  19. Phosphoric acid fuel cell power plant system performance model and computer program

    NASA Technical Reports Server (NTRS)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses in the reformer, the shift converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions, and for several commercial fuels.

  20. Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment

    ERIC Educational Resources Information Center

    Lin, Jing-Wen

    2016-01-01

    This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…

  1. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    PubMed

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) system mean that it can only deliver delayed information. The deployment of synchrophasor measurement devices makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization for a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for purposes of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two developed CCGNN and CCMLPN systems were then implemented on two different scales of power systems, one of which included a large photovoltaic plant. A real-time power system simulator and weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.
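
    As a rough illustration of the idea (not the authors' CCGNN/CCMLPN implementation), a single computing cell can be approximated by a small multilayer perceptron trained to map a short window of synchrophasor frequency samples to predictions at several horizons; the synthetic data, window length, and horizon choices below are assumptions:

    ```python
    # Hedged illustration, not the paper's code: a generic MLP cell predicting
    # bus frequency at several horizons from a window of PMU samples.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    f0 = 60.0                                   # nominal frequency, Hz
    t = np.arange(0, 60, 1 / 30)                # 30 samples/s synchrophasor stream
    freq = f0 + 0.05 * np.sin(0.2 * t) + 0.005 * rng.standard_normal(t.size)

    window, horizons = 10, (1, 15, 60)          # ~33 ms, 0.5 s, 2 s ahead at 30 Hz
    X, Y = [], []
    for i in range(window, freq.size - max(horizons)):
        X.append(freq[i - window:i])
        Y.append([freq[i + h] for h in horizons])
    X, Y = np.asarray(X), np.asarray(Y)

    model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
    model.fit(X[:-100], Y[:-100])               # hold out the last 100 windows
    rmse = np.sqrt(((model.predict(X[-100:]) - Y[-100:]) ** 2).mean(axis=0))
    print("held-out RMSE per horizon (Hz):", rmse)
    ```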

  2. Computational simulations of supersonic magnetohydrodynamic flow control, power and propulsion systems

    NASA Astrophysics Data System (ADS)

    Wan, Tian

    This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent chemically reacting Navier-Stokes equations, the electron energy conservation equation, and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Then, four problems are selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard reentry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first, and the generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and its accuracy and efficiency are necessary when the flow complexity increases.
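
    The kernel named in this abstract, GMRES accelerated by an incomplete-LU preconditioner, can be sketched in a few lines with SciPy (a serial stand-in for the paper's parallel Schwarz/ILU(0) setup; the test matrix is an arbitrary diagonally dominant random example, not a flow Jacobian):

    ```python
    # Serial sketch of preconditioned GMRES with SciPy.
    import numpy as np
    from scipy.sparse import eye, random as sprandom
    from scipy.sparse.linalg import LinearOperator, gmres, spilu

    n = 500
    A = (sprandom(n, n, density=0.01, random_state=1) + 10.0 * eye(n)).tocsc()
    b = np.ones(n)

    ilu = spilu(A)                                  # drop-tolerance incomplete LU
    M = LinearOperator((n, n), matvec=ilu.solve)    # wrap as a preconditioner

    x, info = gmres(A, b, M=M)                      # info == 0 on convergence
    print("info =", info, " residual =", np.linalg.norm(b - A @ x))
    ```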

  3. 3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. ELEVATIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 4. Plan no. 10,548. Scale 1/4 inch to the foot, elevations, and one inch to the foot, sections and details. April 30, 1945, last revised 6/19/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  4. PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.

    2017-12-01

    The Peta-op SupErcomputing Unconventional System (PerSEUS) project aims to explore the use of ultra-low-power, mixed-signal, unconventional computational elements developed by Johns Hopkins University (JHU) for High Performance Scientific Computing (HPC), and to demonstrate that capability on both fluid and particle plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE) and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code and a UCLA general-purpose relativistic Particle-In-Cell (PIC) code.

  5. Thread selection according to power characteristics during context switching on compute nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Randles, Amanda E.

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.

  6. Postprocessing of Voxel-Based Topologies for Additive Manufacturing Using the Computational Geometry Algorithms Library (CGAL)

    DTIC Science & Technology

    2015-06-01

    Postprocessing of 3-dimensional (3-D) topologies that are defined as a set of voxels using the Computational Geometry Algorithms Library (CGAL) ... computational geometry algorithms, several of which are suited to the task. The work flow described in this report involves first defining a set of

  7. Li-ion synaptic transistor for low power analog computing

    DOE PAGES

    Fuller, Elliot J.; Gabaly, Farid El; Leonard, Francois; ...

    2016-11-22

    Nonvolatile redox transistors (NVRTs) based upon Li-ion battery materials are demonstrated as memory elements for neuromorphic computer architectures with multi-level analog states, “write” linearity, low-voltage switching, and low power dissipation. Simulations of back propagation using the device properties reach ideal classification accuracy. Finally, physics-based simulations predict energy costs per “write” operation of <10 aJ when scaled to 200 nm × 200 nm.

  8. The Power of Computer-aided Tomography to Investigate Marine Benthic Communities

    EPA Science Inventory

    Utilization of Computer-aided-Tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...

  9. 4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. FLOOR PLAN AND SECTIONS, ADDITION TO POWER HOUSE. United Engineering Company Ltd., Alameda Shipyard. Also includes plot plan at 1 inch to 100 feet. John Hudspeth, architect, foot of Main Street, Alameda, California. Sheet 3. Plan no. 10,548. Scale 1/4 inch and 1/2 inch to the foot. April 30, 1945, last revised 6/22/45. pencil on vellum - United Engineering Company Shipyard, Boiler House, 2900 Main Street, Alameda, Alameda County, CA

  10. Prediction and characterization of application power use in a high-performance computing environment

    DOE PAGES

    Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...

    2017-02-27

    Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
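
    A minimal sketch of the prediction step described above, assuming a random-forest regressor over a-priori job features (the feature set and the synthetic accounting data are illustrative, not the authors' pipeline):

    ```python
    # Hedged sketch: predict per-job mean node power from features known at
    # submission time, using synthetic stand-in data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n_jobs = 2000
    nodes = rng.integers(1, 64, n_jobs)            # requested node count
    walltime = rng.uniform(0.1, 24.0, n_jobs)      # requested hours
    app_id = rng.integers(0, 10, n_jobs)           # encoded application name
    # Synthetic "truth": per-node power varies by application plus noise.
    power = 250 + 20 * app_id + 5 * rng.standard_normal(n_jobs)

    X = np.column_stack([nodes, walltime, app_id])
    X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out jobs:", round(model.score(X_te, y_te), 3))
    # A power-aware scheduler could sum such predictions over queued jobs to
    # keep the projected facility draw under a target cap.
    ```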

  11. Computational models of an inductive power transfer system for electric vehicle battery charge

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of their charging systems. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charge are presented. Based on the fundamental principles behind IPT systems, 3 kW single-phase and 22 kW three-phase IPT systems for the Renault ZOE are designed in MATLAB/Simulink. The results obtained, based on the technical specifications of the lithium-ion battery and charger type of the Renault ZOE, show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.

  12. On the Computational Power of Spiking Neural P Systems with Self-Organization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  13. On the Computational Power of Spiking Neural P Systems with Self-Organization.

    PubMed

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-10

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  14. On the Computational Power of Spiking Neural P Systems with Self-Organization

    PubMed Central

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun. PMID:27283843

  15. Measured energy savings and performance of power-managed personal computers and monitors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, B.; Piette, M.A.; Kinney, K.

    1996-08-01

    Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a 'sleep' or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the 'As-operated,' 'Standardized,' and 'Maximum' savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, and about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and have greater savings. The difficulty in properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.

  16. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic.

    PubMed

    Xie, Kun; Fox, Grace E; Liu, Jun; Lyu, Cheng; Lee, Jason C; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies--the long-presumed computational motif--are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors--the synaptic switch for learning and memory--were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques--which preferentially encode specific and low-combinatorial features and project inter-cortically--is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6--which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems--is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain's basic computational algorithm is indeed

  17. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

    PubMed Central

    Xie, Kun; Fox, Grace E.; Liu, Jun; Lyu, Cheng; Lee, Jason C.; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z.

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain's basic computational
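
    The permutation-logic count in these records is just the number of distinct nonempty combinations of i inputs. A short enumeration makes the N = 2^i - 1 bookkeeping concrete (the input labels are illustrative):

    ```python
    # Worked illustration of the specific-to-general count N = 2**i - 1: the
    # number of distinct nonempty combinations of i inputs, each presumed to
    # be covered by one cell assembly (clique) in the Theory of Connectivity.
    from itertools import combinations

    inputs = ["appetitive", "emotional", "social", "contextual"]  # i = 4 example
    cliques = [c for r in range(1, len(inputs) + 1)
               for c in combinations(inputs, r)]

    print(len(cliques), "cliques; 2**i - 1 =", 2 ** len(inputs) - 1)  # both 15
    for c in cliques:
        print(c)  # from specific (single-input) to general (all-input) assemblies
    ```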

  18. Training auscultatory skills: computer simulated heart sounds or additional bedside training? A randomized trial on third-year medical students

    PubMed Central

    2010-01-01

    Background The present study compares the value of the additional use of computer-simulated heart sounds with conventional bedside auscultation training for the cardiac auscultation skills of 3rd-year medical students at Oslo University Medical School. Methods In addition to their usual curriculum courses, groups of seven students each were randomized to receive four hours of additional auscultation training, either employing a computer simulator system or adding on more conventional bedside training. Cardiac auscultation skills were afterwards tested using live patients. Each student gave a written description of the auscultation findings in four selected patients and was awarded from 0 to 10 points for each patient. Differences between the two study groups were evaluated using Student's t-test. Results At the auscultation test, no significant difference in mean score was found between the students who had used additional computer-based sound simulation and those who had received additional bedside training. Conclusions Students at an early stage of their cardiology training demonstrated equal performance in cardiac auscultation whether they had received an additional short auscultation course based on computer-simulated training or additional bedside training. PMID:20082701

  19. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the precise super-Turing computational power--as are the static analog neural networks--irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  20. Thread selection according to predefined power characteristics during context switching on compute nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.

  1. Ultra-low power high precision magnetotelluric receiver array based customized computer and wireless sensor network

    NASA Astrophysics Data System (ADS)

    Chen, R.; Xi, X.; Zhao, X.; He, L.; Yao, H.; Shen, R.

    2016-12-01

    Dense 3D magnetotelluric (MT) data acquisition has the benefit of suppressing the static shift and topography effects, and can achieve high-precision, high-resolution inversion of underground structure. This method may play an important role in mineral exploration, geothermal resources exploration, and hydrocarbon exploration. It is necessary to greatly reduce the power consumption of an MT signal receiver for large-scale 3D MT data acquisition, while using a wireless sensor network to monitor the data quality of deployed MT receivers. We adopted a series of technologies to realize the above goal. First, we designed a low-power embedded computer which couples tightly with the other parts of the MT receiver and supports a wireless sensor network. The power consumption of our embedded computer is less than 1 watt. Then we designed a 4-channel data acquisition subsystem which supports 24-bit analog-to-digital conversion, GPS synchronization, and real-time digital signal processing. Furthermore, we developed the power supply and power management subsystem for the MT receiver. Finally, a series of software packages, which support data acquisition, calibration, the wireless sensor network, and testing, were developed. The software which runs on a personal computer can monitor and control over 100 MT receivers in the field for data acquisition and quality control. The total power consumption of the receiver is about 2 watts in full operation. The standby power consumption is less than 0.1 watt. Our testing showed that the MT receiver can acquire good-quality data at ground with an electric dipole length of 3 m. Over 100 MT receivers were built and used for large-scale geothermal exploration in China with great success.

  2. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  3. Application of Nearly Linear Solvers to Electric Power System Computation

    NASA Astrophysics Data System (ADS)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within a reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.

  4. Computation and Experiment: A Powerful Combination to Understand and Predict Reactivities.

    PubMed

    Sperger, Theresa; Sanhueza, Italo A; Schoenebeck, Franziska

    2016-06-21

    Computational chemistry has become an established tool for the study of the origins of chemical phenomena and the examination of molecular properties. Because of major advances in theory, hardware, and software, calculations of molecular processes can nowadays be done with reasonable accuracy on a time-scale that is competitive with, or even faster than, experiments. This overview will highlight broad applications of computational chemistry in the study of organic and organometallic reactivities, including catalytic (NHC-, Cu-, Pd-, Ni-catalyzed) and noncatalytic examples of relevance to organic synthesis. The selected examples showcase the ability of computational chemistry to rationalize and also predict reactivities of broad significance. A particular emphasis is placed on the synergistic interplay of computations and experiments. It is discussed how this approach allows one to (i) gain greater insight than the isolated techniques, (ii) inspire novel chemistry avenues, and (iii) assist in reaction development. Examples of successful rationalizations of reactivities are discussed, including the elucidation of mechanistic features (radical versus polar) and origins of stereoselectivity in NHC-catalyzed reactions as well as the rationalization of ligand effects on ligation states and selectivity in Pd- and Ni-catalyzed transformations. Beyond explaining, the synergistic interplay of computation and experiments is then discussed, showcasing the identification of the likely catalytically active species as a function of ligand, additive, and solvent in Pd-catalyzed cross-coupling reactions. These may vary between mono- or bisphosphine-bound or even anionic Pd complexes in polar media in the presence of coordinating additives. These fundamental studies also inspired avenues in catalysis via dinuclear Pd(I) cycles. Detailed mechanistic studies supporting the direct reactivity of Pd(I)-Pd(I) with aryl halides as well as applications of air-stable dinuclear Pd(I) catalysts are

  5. Analysis and Design of Bridgeless Switched Mode Power Supply for Computers

    NASA Astrophysics Data System (ADS)

    Singh, S.; Bhuvaneswari, G.; Singh, B.

    2014-09-01

    Switched mode power supplies (SMPSs) used in computers need multiple isolated and stiffly regulated output dc voltages with different current ratings. These isolated multiple output dc voltages are obtained by using a multi-winding high frequency transformer (HFT). A half-bridge dc-dc converter is used here for obtaining different isolated and well-regulated dc voltages. In the front end, non-isolated Single Ended Primary Inductance Converters (SEPICs) are added to improve the power quality in terms of low input current harmonics and high power factor (PF). Two non-isolated SEPICs are connected in a way that completely eliminates the need for a single-phase diode-bridge rectifier at the front end. Output dc voltages at both the non-isolated and isolated stages are controlled and regulated separately for power quality improvement. A voltage-mode control approach is used in the non-isolated SEPIC stage for simple and effective control, whereas average current control is used in the second, isolated stage.
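
    For reference, the ideal continuous-conduction-mode SEPIC has the voltage transfer ratio Vout/Vin = D/(1 - D), so the duty cycle required for a given output follows directly; the sketch below uses assumed operating points, not values from the paper:

    ```python
    # Hedged sketch: ideal CCM SEPIC duty-cycle estimate, losses neglected.
    # The 380 V intermediate dc-link target and the input voltages are
    # assumptions chosen for illustration.
    def sepic_duty(v_in: float, v_out: float) -> float:
        """From Vout/Vin = D/(1 - D): D = Vout / (Vin + Vout)."""
        return v_out / (v_in + v_out)

    # A front-end SEPIC stage regulating an intermediate dc link, from which
    # the isolated half-bridge stage would derive the 12 V / 5 V / 3.3 V rails.
    for v_in in (120.0, 250.0, 325.0):
        print(f"Vin = {v_in:6.1f} V -> D = {sepic_duty(v_in, 380.0):.3f}")
    ```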

  6. Symmetry-protected topological phases with uniform computational power in one dimension

    NASA Astrophysics Data System (ADS)

    Raussendorf, Robert; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Stephen, David T.

    2017-07-01

    We investigate the usefulness of ground states of quantum spin chains with symmetry-protected topological order (SPTO) for measurement-based quantum computation. We show that, in spatial dimension 1, if an SPTO phase protects the identity gate, then, subject to an additional symmetry condition that is satisfied in all cases so far investigated, it can also be used for quantum computation.

  7. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.

    1979-01-01

    A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.

  8. Computer simulation of magnetization-controlled shunt reactors for calculating electromagnetic transients in power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpov, A. S.

    2013-01-15

    A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.

  9. Design and fabrication of a sleep apnea device using computer-aided design/additive manufacture technologies.

    PubMed

    Al Mortadi, Noor; Eggbeer, Dominic; Lewis, Jeffrey; Williams, Robert J

    2013-04-01

    The aim of this study was to analyze the latest innovations in additive manufacture techniques and uniquely apply them to dentistry, to build a sleep apnea device requiring rotating hinges. Laser scanning was used to capture the three-dimensional topography of an upper and lower dental cast. The data sets were imported into an appropriate computer-aided design software environment, which was used to design a sleep apnea device. This design was then exported as a stereolithography file and transferred for three-dimensional printing by an additive manufacture machine. The results not only revealed that the novel computer-based technique presented provides new design opportunities but also highlighted limitations that must be addressed before the techniques can become clinically viable.

  10. Pulsar discovery by global volunteer computing.

    PubMed

    Knispel, B; Allen, B; Cordes, J M; Deneva, J S; Anderson, D; Aulbert, C; Bhat, N D R; Bock, O; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Crawford, F; Demorest, P B; Fehrmann, H; Freire, P C C; Gonzalez, M E; Hammer, D; Hessels, J W T; Jenet, F A; Kasian, L; Kaspi, V M; Kramer, M; Lazarus, P; van Leeuwen, J; Lorimer, D R; Lyne, A G; Machenschalk, B; McLaughlin, M A; Messenger, C; Nice, D J; Papa, M A; Pletsch, H J; Prix, R; Ransom, S M; Siemens, X; Stairs, I H; Stappers, B W; Stovall, K; Venkataraman, A

    2010-09-10

    Einstein@Home aggregates the computer power of hundreds of thousands of volunteers from 192 countries to mine large data sets. It has now found a 40.8-hertz isolated pulsar in radio survey data from the Arecibo Observatory taken in February 2007. Additional timing observations indicate that this pulsar is likely a disrupted recycled pulsar. PSR J2007+2722's pulse profile is remarkably wide with emission over almost the entire spin period; the pulsar likely has closely aligned magnetic and spin axes. The massive computing power provided by volunteers should enable many more such discoveries.

  11. Computational effects of inlet representation on powered hypersonic, airbreathing models

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    Computational results are presented to illustrate the powered aftbody effects of representing the scramjet inlet on a generic hypersonic vehicle with a fairing, to divert the external flow, as compared to an operating flow-through scramjet inlet. This study is pertinent to the ground testing of hypersonic, airbreathing models employing scramjet exhaust flow simulation in typical small-scale hypersonic wind tunnels. The comparison of aftbody effects due to inlet representation is well-suited for computational study, since small model size typically precludes the ability to ingest flow into the inlet and perform exhaust simulation at the same time. Two-dimensional analysis indicates that, although flowfield differences exist for the two types of inlet representations, little, if any, difference in surface aftbody characteristics is caused by fairing over the inlet.

  12. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  13. Computational Analysis of Powered Lift Augmentation for the LEAPTech Distributed Electric Propulsion Wing

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Wiese, Michael R.; Farr, Norma L.

    2017-01-01

    A computational study of a distributed electric propulsion wing with a 40° flap deflection has been completed using FUN3D. Two lift-augmentation power conditions were compared with the power-off configuration on the high-lift wing (40° flap) at a 73 mph freestream flow and for a range of angles of attack from -5 degrees to 14 degrees. The computational study also included investigating the benefit of corotating versus counter-rotating propeller spin direction to powered-lift performance. The results indicate a large benefit in lift coefficient, over the entire range of angle of attack studied, by using corotating propellers that all spin counter to the wingtip vortex. For the landing condition, 73 mph, the unpowered 40° flap configuration achieved a maximum lift coefficient of 2.3. With high-lift blowing, the maximum lift coefficient increased to 5.61; the lift augmentation is therefore a factor of 2.4. Taking advantage of the full-span lift augmentation at similar performance means that a wing powered with the distributed electric propulsion system requires only 42 percent of the wing area of the unpowered wing. This technology will allow wings to be 'cruise optimized', meaning that they will be able to fly closer to maximum lift-to-drag conditions at the design cruise speed of the aircraft.
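
    The headline numbers in this abstract can be checked with two lines of arithmetic: since lift at a given speed scales with wing area times lift coefficient, matching the blown wing's lift with an unpowered wing requires the area ratio CL,unpowered/CL,blown:

    ```python
    # Worked check of the reported figures (CL values taken from the abstract).
    cl_unpowered, cl_blown = 2.3, 5.61

    print(f"lift augmentation factor: {cl_blown / cl_unpowered:.1f}")     # -> 2.4
    # Equal lift at equal dynamic pressure: S_powered/S_unpowered = CL_unpowered/CL_blown
    print(f"required wing-area fraction: {cl_unpowered / cl_blown:.0%}")  # -> 41%, consistent
    # with the ~42 percent quoted in the abstract, allowing for rounding.
    ```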

  14. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

    SOL is computer language geared to solution of design problems. Includes mathematical modeling and logical capabilities of computer language like FORTRAN; also includes additional power of nonlinear mathematical programming methods at language level. SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. Provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. Implemented on VAX/VMS computer systems. Requires VAX FORTRAN compiler to produce executable program.

  15. A digital computer simulation and study of a direct-energy-transfer power-conditioning system

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.

    1974-01-01

    A digital computer simulation technique that can be used to study composite power-conditioning systems was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance, such as steady-state characteristics and transient responses to severely varying operating conditions, are demonstrated experimentally.

  16. Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

    NASA Astrophysics Data System (ADS)

    Matsypura, Dmytro

    In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following

  17. Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing

    DTIC Science & Technology

    2006-11-01

    in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and

  18. Computer program for design and performance analysis of navigation-aid power systems. Program documentation. Volume 1: Software requirements document

    NASA Technical Reports Server (NTRS)

    Goltz, G.; Kaiser, L. M.; Weiner, H.

    1977-01-01

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.

  19. The Ames Power Monitoring System

    NASA Technical Reports Server (NTRS)

    Osetinsky, Leonid; Wang, David

    2003-01-01

    The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low-power-factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also

  20. Regional geoid computation by least squares modified Hotine's formula with additive corrections

    NASA Astrophysics Data System (ADS)

    Märdla, Silja; Ellmann, Artu; Ågren, Jonas; Sjöberg, Lars E.

    2018-03-01

    Geoid and quasigeoid modelling from gravity anomalies by the method of least squares modification of Stokes's formula with additive corrections is adapted for the usage with gravity disturbances and Hotine's formula. The biased, unbiased and optimum versions of least squares modification are considered. Equations are presented for the four additive corrections that account for the combined (direct plus indirect) effect of downward continuation (DWC), topographic, atmospheric and ellipsoidal corrections in geoid or quasigeoid modelling. The geoid or quasigeoid modelling scheme by the least squares modified Hotine formula is numerically verified, analysed and compared to the Stokes counterpart in a heterogeneous study area. The resulting geoid models and the additive corrections computed both for use with Stokes's or Hotine's formula differ most in high topography areas. Over the study area (reaching almost 2 km in altitude), the approximate geoid models (before the additive corrections) differ by 7 mm on average with a 3 mm standard deviation (SD) and a maximum of 1.3 cm. The additive corrections, out of which only the DWC correction has a numerically significant difference, improve the agreement between respective geoid or quasigeoid models to an average difference of 5 mm with a 1 mm SD and a maximum of 8 mm.
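
    For context, the unmodified Hotine integral evaluates the geoid height N from gravity disturbances $\delta g$ as (standard geodesy, not the paper's modified kernel):

        $$N = \frac{R}{4\pi\gamma} \iint_{\sigma} H(\psi)\, \delta g \, d\sigma, \qquad H(\psi) = \frac{1}{\sin(\psi/2)} - \ln\!\left(1 + \frac{1}{\sin(\psi/2)}\right),$$

    where R is the mean Earth radius, $\gamma$ normal gravity, and $\psi$ the spherical distance; the least squares modification replaces $H(\psi)$ with a modified kernel and appends the four additive corrections described above.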

  1. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as between chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
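
    The abstract does not spell out the reordering algorithm; as a toy illustration of the idea only, the sketch below packs jobs into time slots with a first-fit-decreasing heuristic so that no slot exceeds a facility power cap (the job data, cap, and slot model are invented for illustration):

        # Toy stand-in for energy-aware schedule reordering (not the paper's
        # method): first-fit-decreasing packing of jobs into time slots so the
        # summed mean power draw of any slot stays under cap_kw.
        def reorder_under_cap(jobs, cap_kw):
            """jobs: list of (name, mean_power_kw) pairs -- hypothetical data."""
            slots = []  # each slot: [remaining headroom in kW, list of job names]
            for name, kw in sorted(jobs, key=lambda j: -j[1]):
                for slot in slots:
                    if slot[0] >= kw:      # job fits under the cap in this slot
                        slot[0] -= kw
                        slot[1].append(name)
                        break
                else:                      # no slot has room: open a new one
                    slots.append([cap_kw - kw, [name]])
            return [names for _, names in slots]

        # reorder_under_cap([("cfd", 90), ("ml", 60), ("post", 40)], 120)
        # -> [["cfd"], ["ml", "post"]]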

  2. [Restoration filtering based on projection power spectrum for single-photon emission computed tomography].

    PubMed

    Kubo, N

    1995-04-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical "least squares filter" theory. It is necessary to know the object power spectrum and the noise power spectrum. The power spectrum is estimated from the power spectrum of a projection, when the high-frequency power spectrum of a projection is adequately approximated as a polynomial exponential expression. A study of the restoration with the filter based on a projection power spectrum was conducted, and compared with that of the "Butterworth" filtering method (cut-off frequency of 0.15 cycles/pixel), and "Wiener" filtering (signal-to-noise power spectrum ratio was a constant). Normalized mean-squared errors (NMSE) of the phantom, two line sources located in a 99mTc filled cylinder, were used. NMSE of the "Butterworth" filter, "Wiener" filter, and filtering based on a power spectrum were 0.77, 0.83, and 0.76 respectively. Clinically, brain SPECT images utilizing this new restoration filter improved the contrast. Thus, this filter may be useful in diagnosis of SPECT images.
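
    As a rough sketch of this family of restoration filters (a least squares/Wiener-type gain built from estimated object and noise power spectra; the paper's polynomial-exponential fit of the projection spectrum and the SPECT-specific details are omitted):

        import numpy as np

        def restore(projection, object_ps, noise_ps):
            """Least squares (Wiener-type) restoration in frequency space.
            object_ps and noise_ps are power-spectrum estimates sampled on the
            same grid as the 2-D FFT of `projection` (assumed inputs)."""
            F = np.fft.fft2(projection)
            H = object_ps / (object_ps + noise_ps)  # restoration filter gain
            return np.real(np.fft.ifft2(F * H))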

  3. Heterotic computing: exploiting hybrid computational devices.

    PubMed

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  4. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  5. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C. C.

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities in developing computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuits; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  6. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, M.; Klimeck, G.; Hanks, D.

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.

  7. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code COMPUTE was used to solve this model, with the method of mixed penalty function combined with Hooke and Jeeves pattern search chosen to evaluate this specific optimization problem.
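
    The COMPUTE code itself is not reproduced in the abstract; the following is a minimal sketch of the Hooke and Jeeves pattern search it applies, for the unconstrained core only (in the paper, a mixed penalty function folds the constraints into the objective f):

        import numpy as np

        def explore(f, x, step):
            # Coordinate-wise exploratory moves around x.
            best = x.copy()
            for i in range(len(best)):
                for delta in (step, -step):
                    trial = best.copy()
                    trial[i] += delta
                    if f(trial) < f(best):
                        best = trial
                        break
            return best

        def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
            base = np.asarray(x0, dtype=float)
            while step > tol:
                trial = explore(f, base, step)
                if f(trial) < f(base):
                    # Pattern moves: keep extrapolating while it pays off.
                    while True:
                        candidate = explore(f, trial + (trial - base), step)
                        if f(candidate) < f(trial):
                            base, trial = trial, candidate
                        else:
                            break
                    base = trial
                else:
                    step *= shrink  # no improvement: refine the mesh
            return base

        # e.g. hooke_jeeves(lambda x: (x[0] - 1)**2 + 10 * (x[1] + 2)**2, [0, 0])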

  8. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    NASA Astrophysics Data System (ADS)

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-01

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.
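
    For context (standard superparamagnetism, not taken from the paper): a nanomagnet with energy barrier $E_b$ switches thermally at the Néel-Arrhenius rate

        $$\tau = \tau_0 \exp\!\left(\frac{E_b}{k_B T}\right),$$

    with attempt time $\tau_0$ on the order of a nanosecond, so as $E_b$ approaches $k_B T$ the dwell time collapses and the junction behaves as the drive-free stochastic oscillator exploited here.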

  9. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation.

    PubMed

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-26

    When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.

  10. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  11. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 2: SYSTID user's guide

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.

  12. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS), phase 1

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The large-signal behaviors of a regulator depend largely on the type of power circuit topology and control. Thus, for maximum flexibility, it is best to develop models for each functional block as independent modules. A regulator can then be configured by collecting appropriate pre-defined modules for each functional block. In order to complete the component model generation for a comprehensive spacecraft power system, the following modules were developed: solar array switching unit and control; shunt regulators; and battery discharger. The capability of each module is demonstrated using a simplified Direct Energy Transfer (DET) system. Large-signal behaviors of solar array power systems were analyzed, including the stability of the solar array system operating points with a nonlinear load. The state-plane analysis illustrates trajectories of the system operating point under various conditions. Stability and transient responses of the system operating near the solar array's maximum power point are also analyzed. The solar array system mode of operation is described using the DET spacecraft power system, and the DET system is simulated for various operating conditions. Transfer of the software program CAMAPPS (Computer Aided Modeling and Analysis of Power Processing Systems) to NASA/GSFC (Goddard Space Flight Center) was accomplished.

  13. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. As the myriad smart sensors and meters in the power grid increase by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze, and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.

  14. 78 FR 47805 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ... Documents Access and Management System (ADAMS): You may access publicly available documents online in the... Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants,'' issued for... Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION: Revision...

  15. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    Cost analysis of the phosphoric acid fuel cell power plant includes two parts: a method for estimation of system capital costs, and an economic analysis which determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.

  16. 19 CFR 201.14 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 3 2014-04-01 2014-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 201.14 Section 201.14 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations...

  17. 19 CFR 201.14 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 3 2013-04-01 2013-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 201.14 Section 201.14 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations...

  18. 19 CFR 210.6 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 210.6 Section 210.6 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT...

  19. Spacecraft solid state power distribution switch

    NASA Technical Reports Server (NTRS)

    Praver, G. A.; Theisinger, P. C.

    1986-01-01

    As a spacecraft performs its mission, various loads are connected to the spacecraft power bus in response to commands from an on board computer, a function called power distribution. For the Mariner Mark II set of planetary missions, the power bus is 30 volts dc and when loads are connected or disconnected, both the bus and power return side must be switched. In addition, the power distribution function must be immune to single point failures and, when power is first applied, all switches must be in a known state. Traditionally, these requirements have been met by electromechanical latching relays. This paper describes a solid state switch which not only satisfies the requirements but incorporates several additional features including soft turn on, programmable current trip point with noise immunity, instantaneous current limiting, and direct telemetry of load currents and switch status. A breadboard of the design has been constructed and some initial test results are included.

  20. Enhanced MFC power production and struvite recovery by the addition of sea salts to urine.

    PubMed

    Merino-Jimenez, Irene; Celorrio, Veronica; Fermin, David J; Greenman, John; Ieropoulos, Ioannis

    2017-02-01

    Urine is an excellent fuel for electricity generation in Microbial Fuel Cells (MFCs), especially with practical implementations in mind. Moreover, urine has a high content of nutrients, which can be easily recovered. Struvite (MgNH₄PO₄·6H₂O) crystals naturally precipitate in urine, but this reaction can be enhanced by the introduction of additional magnesium. In this work, the effect of magnesium additives on the power output of the MFCs and on the catholyte generation is evaluated. Several magnesium sources, including MgCl₂, artificial sea water, and a commercially available sea salts mixture for seawater preparation (SeaMix), were mixed with real fresh human urine in order to enhance struvite precipitation. The supernatant of each mixture was tested as a feedstock for the MFCs and evaluated in terms of power output and catholyte generation. The commercial SeaMix showed the best performance in terms of struvite precipitation, increasing the amount of struvite in the solid collected from 21% to 94%. Moreover, the SeaMix increased the maximum power performance of the MFCs by over 10%, and it also changed the properties of the catholyte collected by increasing the pH, conductivity, and concentration of chloride ions. These results demonstrate that the addition of sea salts to real urine is beneficial for both struvite recovery and electricity generation in MFCs. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
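
    The underlying precipitation reaction (standard chemistry, added for context) is

        $$\mathrm{Mg^{2+} + NH_4^+ + PO_4^{3-} + 6\,H_2O \;\longrightarrow\; MgNH_4PO_4\cdot 6H_2O\!\downarrow}$$

    which is why supplying extra magnesium, the limiting ion in urine, drives struvite recovery.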

  1. Computational Fluid Dynamics Simulation Study of Active Power Control in Wind Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleming, Paul; Aho, Jake; Gebraad, Pieter

    2016-08-01

    This paper presents an analysis performed on a wind plant's ability to provide active power control services using a high-fidelity computational fluid dynamics-based wind plant simulator. This approach allows examination of the impact of wind turbine wake interactions within a wind plant on performance of the wind plant controller. The paper investigates several control methods for improving performance in waked conditions. One method uses wind plant wake controls, an active field of research in which wind turbine control systems are coordinated to account for their wakes, to improve the overall performance. Results demonstrate the challenge of providing active power control in waked conditions but also potential methods for improving this performance.

  2. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    NASA Astrophysics Data System (ADS)

    Tan, Jianbin

    2018-02-01

    The engineering design of large-scale grid-connected photovoltaic power stations, and the development of the many associated simulation and analysis systems, require good computer drawing of the operating characteristic curves of photovoltaic array units; a segmented non-linear interpolation algorithm is proposed for this purpose. In the calculation method, component performance parameters serve as the main design basis, from which the computer obtains five PV module performance parameters. Combined with the series and parallel connection of the PV array, computer drawing of the performance curve of the PV array unit can then be realized. The specific data can also be passed to the PV development software module for calculation, improving the operation of the PV array unit in practical applications.
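
    The "five PV module performance parameters" are consistent with the widely used five-parameter single-diode model (an assumption on our part; the abstract does not name the model):

        $$I = I_{ph} - I_0\left[\exp\!\left(\frac{V + I R_s}{n V_t}\right) - 1\right] - \frac{V + I R_s}{R_{sh}},$$

    with photocurrent $I_{ph}$, diode saturation current $I_0$, series and shunt resistances $R_s$ and $R_{sh}$, and ideality factor $n$ times thermal voltage $V_t$; series/parallel connection of modules then scales V and I to give the array-unit curve.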

  3. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost, and performance, then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.

  4. Additional support for the TDK/MABL computer program

    NASA Technical Reports Server (NTRS)

    Nickerson, G. R.; Dunn, Stuart S.

    1993-01-01

    An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream, with finite rate chemical reaction, can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties (MABL-K option), the program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the phase 1 work effort.

  5. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing the cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  6. Comparison of circular orbit and Fourier power series ephemeris representations for backup use by the upper atmosphere research satellite onboard computer

    NASA Technical Reports Server (NTRS)

    Kast, J. R.

    1988-01-01

    The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.
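
    One common form of such a Fourier power series fit, per axis, is

        $$x(t) \approx a_0 + \sum_{k=1}^{K} \left[ a_k \cos(k\omega t) + b_k \sin(k\omega t) \right],$$

    with $\omega$ the fundamental (orbital) frequency; the abstract's 42 coefficients per axis correspond to on the order of 20 harmonics plus low-order terms, though the exact UARS parameterization is not given there.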

  7. X-ray computed tomography for additive manufacturing: a review

    NASA Astrophysics Data System (ADS)

    Thompson, A.; Maskery, I.; Leach, R. K.

    2016-07-01

    In this review, the use of x-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in industrial verification of additively manufactured (AM) parts. The XCT technology and AM processes are summarised, and their historical use is documented. The use of XCT and AM as tools for medical reverse engineering is discussed, and the transition of XCT from a tool used solely for imaging to a vital metrological instrument is documented. The current states of the combined technologies are then examined in detail, separated into porosity measurements and general dimensional measurements. In the conclusions of this review, the limitation of resolution on improvement of porosity measurements and the lack of research regarding the measurement of surface texture are identified as the primary barriers to ongoing adoption of XCT in AM. The limitations of both AM and XCT regarding slow speeds and high costs, when compared to other manufacturing and measurement techniques, are also noted as general barriers to continued adoption of XCT and AM.

  8. Multichannel Phase and Power Detector

    NASA Technical Reports Server (NTRS)

    Li, Samuel; Lux, James; McMaster, Robert; Boas, Amy

    2006-01-01

    An electronic signal-processing system determines the phases of input signals arriving in multiple channels, relative to the phase of a reference signal with which the input signals are known to be coherent in both phase and frequency. The system also gives an estimate of the power levels of the input signals. A prototype of the system has four input channels that handle signals at a frequency of 9.5 MHz, but the basic principles of design and operation are extensible to other signal frequencies and greater numbers of channels. The prototype system consists mostly of three parts: An analog-to-digital-converter (ADC) board, which coherently digitizes the input signals in synchronism with the reference signal and performs some simple processing; A digital signal processor (DSP) in the form of a field-programmable gate array (FPGA) board, which performs most of the phase- and power-measurement computations on the digital samples generated by the ADC board; and A carrier board, which allows a personal computer to retrieve the phase and power data. The DSP contains four independent phase-only tracking loops, each of which tracks the phase of one of the preprocessed input signals relative to that of the reference signal (see figure). The phase values computed by these loops are averaged over intervals, the length of which is chosen to obtain output from the DSP at a desired rate. In addition, a simple sum of squares is computed for each channel as an estimate of the power of the signal in that channel. The relative phases and the power level estimates computed by the DSP could be used for diverse purposes in different settings. For example, if the input signals come from different elements of a phased-array antenna, the phases could be used as indications of the direction of arrival of a received signal and/or as feedback for electronic or mechanical beam steering. The power levels could be used as feedback for automatic gain control in preprocessing of incoming signals
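
    A minimal digital analogue of the per-channel computation (assuming a known reference frequency f0 and sample rate fs; the FPGA tracking-loop structure is not reproduced):

        import numpy as np

        def phase_and_power(x, f0, fs):
            """Phase of a tone at f0 in x, relative to a cosine reference,
            plus a simple sum-of-squares power estimate (cf. the abstract)."""
            n = np.arange(len(x))
            i = np.mean(x * np.cos(2 * np.pi * f0 / fs * n))  # in-phase
            q = np.mean(x * np.sin(2 * np.pi * f0 / fs * n))  # quadrature
            phase = np.arctan2(-q, i)           # x ~ A*cos(w*n + phase)
            power = np.sum(x ** 2) / len(x)     # average power estimate
            return phase, power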

  9. Effect of calcium formate as an additive on desulfurization in power plants.

    PubMed

    Li, Zhenhua; Xie, Chunfang; Lv, Jing; Zhai, Ruiguo

    2018-05-01

    SO₂ in flue gas needs to be eliminated to alleviate air pollution. As the quality of coal decreases and environmental standard requirements become more stringent, high-efficiency desulfurization of flue gas faces more and more challenges. As an economical and environmentally friendly solution, the effect of calcium formate as an additive on desulfurization efficiency in the wet flue gas desulfurization (WFGD) process was studied for the first time. Improvement of the desulfurization efficiency was achieved with limited change in pH after calcium formate was added into the reactor, and it was found to work better than the other additives tested. The positive effects were further verified in a power plant, which showed that adding calcium formate could promote the dissolution of calcium carbonate, accelerate the growth of gypsum crystals, and improve the efficiency of desulfurization. Thus, calcium formate was proved to be an effective additive and can potentially be used to reduce the amount of limestone slurry required, as well as the energy consumption and operating costs in industrial desulfurization. Copyright © 2017. Published by Elsevier B.V.

  10. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  11. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...

  12. System and method for controlling power consumption in a computer system based on user satisfaction

    DOEpatents

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
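
    A hypothetical sketch of the selection step (the frequency table, scores, and threshold below are invented; the patent learns the relationship per user and per application, as the abstract describes):

        AVAILABLE_FREQS_MHZ = [800, 1400, 2000, 2600]  # assumed P-states

        def select_frequency(satisfaction_by_freq, threshold=0.8):
            """satisfaction_by_freq: {freq_mhz: score in [0, 1]} recorded for
            one (user, application) pair -- the 'determined relationship'."""
            ok = [f for f in AVAILABLE_FREQS_MHZ
                  if satisfaction_by_freq.get(f, 0.0) >= threshold]
            # Lowest acceptable frequency minimizes power; otherwise run at max.
            return min(ok) if ok else max(AVAILABLE_FREQS_MHZ)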

  13. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  14. Addition of flexible body option to the TOLA computer program, part 1

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    This report describes a flexible body option that was developed and added to the Takeoff and Landing Analysis (TOLA) computer program. The addition of the flexible body option to TOLA allows it to be used to study essentially any conventional type of airplane in the ground operating environment. It provides the capability to predict the total motion of selected points on the airplane. The analytical methods incorporated in the program and operating instructions for the option are described. A program listing is included, along with several example problems to aid in interpretation of the operating instructions and to illustrate program usage.

  15. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  16. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.
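
    As one concrete example of the prototype computations such programs perform, the element values of an n-th order Butterworth low-pass prototype have a closed form (standard filter theory, not the paper's program listing):

        import numpy as np

        def butterworth_prototype(n):
            """g_1..g_n for a Butterworth low-pass prototype (g_0 = g_{n+1} = 1):
            g_k = 2 sin((2k - 1) * pi / (2n))."""
            k = np.arange(1, n + 1)
            return 2.0 * np.sin((2 * k - 1) * np.pi / (2 * n))

        # butterworth_prototype(3) -> array([1., 2., 1.])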

  17. Soft drink effects on sensorimotor rhythm brain computer interface performance and resting-state spectral power.

    PubMed

    Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He

    2016-08-01

    Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting-state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting-state EEG. During each session, the subjects drank an unlabeled soft drink with either sugar, caffeine, or neither ingredient. A comparison of resting-state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite this attenuation of the frequency range used for the control signal, average BCI performance after caffeine was the same as control. Our work provides a useful characterization of caffeine, the world's most popular stimulant, in terms of its effect on brain signal frequencies and their impact on BCI performance.
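
    The alpha/beta comparison reduces to band-power estimates from the resting-state PSD; a minimal sketch (window length and band edges are common defaults, not the paper's exact settings):

        import numpy as np
        from scipy.signal import welch

        def band_power(eeg, fs, lo, hi):
            """Average power of `eeg` in the [lo, hi] Hz band via Welch's PSD."""
            f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
            mask = (f >= lo) & (f <= hi)
            return np.sum(psd[mask]) * (f[1] - f[0])  # rectangle integration

        # alpha = band_power(x, 250, 8, 13); beta = band_power(x, 250, 13, 30)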

  18. Proates a computer modelling system for power plant: Its description and application to heatrate improvement within PowerGen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, C.H.; Ready, A.B.; Rea, J.

    1995-06-01

    Versions of the computer program PROATES (PROcess Analysis for Thermal Energy Systems) have been used since 1979 to analyse plant performance improvement proposals relating to existing plant and also to evaluate new plant designs. Several plant modifications have been made to improve performance based on the model predictions, and the predicted performance has been realised in practice. The program was born out of a need to model the overall steady-state performance of complex plant, to enable proposals to change plant component items or operating strategy to be evaluated. To do this with confidence it is necessary to model the multiple thermodynamic interactions between the plant components. The modelling system is modular in concept, allowing the configuration of individual plant components to represent any particular power plant design. A library exists of physics-based modules which have been extensively validated and which provide representations of a wide range of boiler, turbine and CW system components. Changes to model data and construction are achieved via a user-friendly graphical model editing/analysis front-end, with results being presented via the computer screen or hard copy. The paper describes briefly the modelling system but concentrates mainly on the application of the modelling system to assess design re-optimisation, firing with different fuels and the re-powering of an existing plant.

  19. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Methods for distributed computing currently receive much attention, and one approach is the use of multi-agent systems. Distributed computing organized on conventional networked computers can face security threats from the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with ordinary network PCs used as the computing nodes. The proposed multi-agent control system for distributed computing makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve a large task. Agents on the computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers can be increased by connecting new computers to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment, where the number of computers on the network may change dynamically. The developed multi-agent system also detects falsification of the results of the distributed computation, which could otherwise lead to wrong decisions, and checks and corrects wrong results.

  20. U.S. EPA’s Computational Toxicology Program: Innovation Powered by Chemistry (Dalton State College presentation)

    EPA Science Inventory

    Invited presentation at Dalton College, Dalton, GA to the Alliance for Innovation & Sustainability, April 20, 2017. U.S. EPA’s Computational Toxicology Program: Innovation Powered by Chemistry. It is estimated that tens of thousands of commercial and industrial chemicals are ...

  1. Children, Computers, and Powerful Ideas

    ERIC Educational Resources Information Center

    Bull, Glen

    2005-01-01

    Today it is commonplace that computers and technology permeate almost every aspect of education. In the late 1960s, though, the idea that computers could serve as a catalyst for thinking about the way children learn was a radical concept. In the early 1960s, Seymour Papert joined the faculty of MIT and founded the Artificial Intelligence Lab with…

  2. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each machine has a quad-core processor, giving the cluster eight processing cores in total. The cluster runs Ubuntu 14.04 Linux with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test verified that the computers could pass the required information between them without any problem, and was done using a simple MPI 'Hello' program written in C (an analogous sketch is given below). The performance test was done to show that the cluster's computing performance is much better than that of a single-CPU computer. In this performance test, the same code was run using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases, shortening by roughly half each time the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power than a single-CPU computer; this can be beneficial for research requiring high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
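
    The communication check reduces to each rank reporting in; the authors' test program was in C, but an equivalent sketch with mpi4py looks like this (run with, e.g., mpiexec -n 8 python hello.py):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        print(f"Hello from rank {comm.Get_rank()} of {comm.Get_size()} "
              f"on {MPI.Get_processor_name()}")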

  3. A Framework to Design the Computational Load Distribution of Wireless Sensor Networks in Power Consumption Constrained Environments

    PubMed Central

    Sánchez-Álvarez, David; Rodríguez-Pérez, Francisco-Javier

    2018-01-01

    In this paper, we present a work based on the distribution of computational load among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision support framework helping WSN designers take decisions about computational load distribution for those WSNs where power consumption is a key issue. (When we refer to a "framework" in this work, we mean a support tool for making decisions, in which executive judgment can be included along with the WSN designer's set of mathematical tools; this work shows the need to include load distribution as an integral component of the WSN system when making early decisions regarding energy consumption.) The framework takes advantage of the idea that balancing the computational load between sensor nodes and the Hub/Sink can improve energy consumption for the whole WSN, or at least for its battery-powered nodes. The approach is not trivial, and it takes into account related issues such as the required data distribution and the connectivity and availability of the nodes and Hub/Sink, given their connectivity features and duty cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, the low prices fetched by vegetables, and the lack of continuous power supplies can make sensing solutions viable or inviable for farmers. The proposed framework systematizes and facilitates the complex calculations WSN designers require, taking into account the most relevant variables regarding power consumption and avoiding full, partial, or prototype implementations and measurements of the different candidate computational load distributions for a specific WSN. PMID:29570645

  4. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
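
    The classical Ruze result on which this model builds relates gain loss to the rms surface error $\varepsilon$:

        $$\frac{G}{G_0} = \exp\!\left[ -\left( \frac{4\pi\varepsilon}{\lambda} \right)^{2} \right],$$

    which the paper generalizes to annulus-dependent rms errors, nonuniform illumination, and explicit F/D dependence.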

  5. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  6. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and in a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  7. User's manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 2 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System analysis (SEPS) computer program, which performs detailed load analysis, including prediction of the energy demands and consumables requirements of the shuttle electric power system, along with parametric and special-case studies of that system, is described. The functional flow diagram of the SEPS program is presented, along with data base requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit inputs and fixed data requirements are included, and run procedures and deck setups are described.

  8. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F.; Siebers, Jeffrey V.

    2016-01-15

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent
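
    As a concrete illustration of the BVM decomposition step described above, the sketch below solves the two-energy linear system for the basis weights and then interpolates the basis materials' electron densities and mean excitation energies, the inputs to a Bethe–Bloch stopping power calculation. All numerical values are illustrative stand-ins, not data from the paper:

      import numpy as np

      # Hypothetical attenuation coefficients of the two basis materials at the
      # two DECT energies (rows: low/high energy; columns: basis 1, basis 2).
      B = np.array([[0.22, 0.18],
                    [0.17, 0.15]])
      mu = np.array([0.20, 0.16])   # measured attenuation of the unknown material

      # BVM: mu(E) = c1*b1(E) + c2*b2(E); solve the 2x2 system for the weights.
      c1, c2 = np.linalg.solve(B, mu)

      # The weights then interpolate electron density and (log) mean excitation
      # energy of the basis materials (values below are illustrative).
      rho_e_basis = np.array([3.38e23, 3.46e23])   # electrons/cm^3
      lnI_basis = np.log(np.array([68.7, 75.3]))   # I in eV
      rho_e = c1 * rho_e_basis[0] + c2 * rho_e_basis[1]
      lnI = (c1 * rho_e_basis[0] * lnI_basis[0] +
             c2 * rho_e_basis[1] * lnI_basis[1]) / rho_e
      print(f"c1={c1:.3f}, c2={c2:.3f}, I={np.exp(lnI):.1f} eV")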

  9. A Freeware Path to Neutron Computed Tomography

    NASA Astrophysics Data System (ADS)

    Schillinger, Burkhard; Craft, Aaron E.

    Neutron computed tomography has become a routine method at many neutron sources due to the availability of digital detection systems, powerful computers and advanced software. The commercial packages Octopus by Inside Matters and VGStudio by Volume Graphics have been established as a quasi-standard for high-end computed tomography. However, these packages require a stiff investment and are available to users only on-site at the imaging facility to do their data processing. There is a demand from users to have image processing software at home to do further data processing; in addition, neutron computed tomography is now being introduced even at smaller and older reactors. Operators need to show a first working tomography setup before they can obtain a budget to build an advanced tomography system. Several packages are available on the web for free; however, these have been developed for X-rays or synchrotron radiation and are not immediately usable for neutron computed tomography. Three reconstruction packages and three 3D-viewers have been identified and used even for gigabyte-scale datasets. This paper is not a scientific publication in the classic sense, but is intended as a review to provide searchable help to make the described packages usable for the tomography community. It presents the necessary additional preprocessing in ImageJ, some workarounds for bugs in the software, and undocumented or badly documented parameters that need to be adapted for neutron computed tomography. The result is a slightly complicated, but surprisingly high-quality path to neutron computed tomography images in 3D, but not a replacement for the even more powerful commercial software mentioned above.
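
    The preprocessing mentioned above typically starts with dark-current and open-beam (flat-field) normalization before any reconstruction package is run. A minimal numpy sketch under that assumption (the array contents are synthetic stand-ins for real detector frames):

      import numpy as np

      def normalize_projection(raw, open_beam, dark):
          """Flat-field correction followed by the Beer-Lambert log transform."""
          num = raw.astype(np.float64) - dark
          den = open_beam.astype(np.float64) - dark
          transmission = np.clip(num / np.maximum(den, 1e-6), 1e-6, None)
          return -np.log(transmission)   # line integrals of the attenuation

      # Synthetic frames standing in for measured radiographs.
      rng = np.random.default_rng(0)
      dark = rng.normal(100, 2, (512, 512))
      open_beam = dark + 1000.0
      raw = dark + 1000.0 * np.exp(-0.5)   # uniform object with mu*t = 0.5
      print(normalize_projection(raw, open_beam, dark).mean())   # ~0.5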

  10. Additive Manufacturing of a Microbial Fuel Cell—A detailed study

    PubMed Central

    Calignano, Flaviana; Tommasi, Tonia; Manfredi, Diego; Chiolerio, Alessandro

    2015-01-01

    In contemporary society we observe an everlasting permeation of electronic devices: smartphones, portable computing tools. The tiniest living organisms on Earth could become the key to addressing the challenge of powering them: energy generation by bacterial processes from renewable stocks/waste through devices such as microbial fuel cells (MFCs). However, the application of this solution has been limited by a moderately low efficiency. We explored the limits, if any, of additive manufacturing (AM) technology to fabricate a fully AM-based powering device, exploiting low density, open porosities able to host the microbes, and systems that are easy to fuel continuously and to run safely. We obtained an optimal energy recovery close to 3 kWh m−3 per day that can power sensors and low-power appliances, allowing data processing and transmission from remote/harsh environments. PMID:26611142

  11. Additive Manufacturing of a Microbial Fuel Cell—A detailed study

    NASA Astrophysics Data System (ADS)

    Calignano, Flaviana; Tommasi, Tonia; Manfredi, Diego; Chiolerio, Alessandro

    2015-11-01

    In contemporary society we observe an everlasting permeation of electronic devices: smartphones, portable computing tools. The tiniest living organisms on Earth could become the key to addressing the challenge of powering them: energy generation by bacterial processes from renewable stocks/waste through devices such as microbial fuel cells (MFCs). However, the application of this solution has been limited by a moderately low efficiency. We explored the limits, if any, of additive manufacturing (AM) technology to fabricate a fully AM-based powering device, exploiting low density, open porosities able to host the microbes, and systems that are easy to fuel continuously and to run safely. We obtained an optimal energy recovery close to 3 kWh m-3 per day that can power sensors and low-power appliances, allowing data processing and transmission from remote/harsh environments.

  12. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
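
    One plausible reading of the acceleration scheme, sketched on a toy linear system rather than real CBCT geometry: each ordered-subset update is amplified by the subset count raised to a power factor p (p = 1 recovers classic OS), and a crude quadratic smoothing step stands in for true TV minimization. This is an illustration of the idea, not the authors' algorithm; all parameters are invented:

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.random((240, 64))          # toy system matrix (CBCT stand-in)
      x_true = rng.random(64)
      b = A @ x_true

      n_sub, p = 8, 0.7                  # subset count and power factor
      subsets = np.array_split(rng.permutation(A.shape[0]), n_sub)
      x = np.zeros(64)
      for it in range(20):
          for s in subsets:
              As, bs = A[s], b[s]
              g = As.T @ (bs - As @ x)   # gradient on this subset
              # Amplify the subset update by n_sub**p (p = 1 is classic OS).
              x += (n_sub ** p) * g / (np.linalg.norm(As, 2) ** 2 * n_sub)
          # Crude quadratic smoothing standing in for a TV-minimization step.
          dx = np.diff(x)
          x[:-1] += 0.01 * dx
          x[1:] -= 0.01 * dx
      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))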

  13. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  14. Green Secure Processors: Towards Power-Efficient Secure Processor Design

    NASA Astrophysics Data System (ADS)

    Chhabra, Siddhartha; Solihin, Yan

    With the increasing wealth of digital information stored on computer systems today, security issues have become increasingly important. In addition to attacks targeting the software stack of a system, hardware attacks have become equally likely. Researchers have proposed Secure Processor Architectures which utilize hardware mechanisms for memory encryption and integrity verification to protect the confidentiality and integrity of data and computation, even from sophisticated hardware attacks. While there have been many works addressing performance and other system level issues in secure processor design, power issues have largely been ignored. In this paper, we first analyze the sources of power (energy) increase in different secure processor architectures. We then present a power analysis of various secure processor architectures in terms of their increase in power consumption over a base system with no protection and then provide recommendations for designs that offer the best balance between performance and power without compromising security. We extend our study to the embedded domain as well. We also outline the design of a novel hybrid cryptographic engine that can be used to minimize the power consumption for a secure processor. We believe that if secure processors are to be adopted in future systems (general purpose or embedded), it is critically important that power issues are considered in addition to performance and other system level issues. To the best of our knowledge, this is the first work to examine the power implications of providing hardware mechanisms for security.

  15. Re-Form: FPGA-Powered True Codesign Flow for High-Performance Computing In The Post-Moore Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappello, Franck; Yoshii, Kazutomo; Finkel, Hal

    Multicore scaling will end soon because of practical power limits. Dark silicon is becoming an even bigger issue than the end of Moore's law. In the post-Moore era, the energy efficiency of computing will be a major concern. FPGAs could be a key to maximizing energy efficiency. In this paper we address severe challenges in the adoption of FPGAs in HPC and describe “Re-form,” an FPGA-powered codesign flow.

  16. AA9int: SNP Interaction Pattern Search Using Non-Hierarchical Additive Model Set.

    PubMed

    Lin, Hui-Yi; Huang, Po-Yu; Chen, Dung-Tsa; Tung, Heng-Yuan; Sellers, Thomas A; Pow-Sang, Julio; Eeles, Rosalind; Easton, Doug; Kote-Jarai, Zsofia; Amin Al Olama, Ali; Benlloch, Sara; Muir, Kenneth; Giles, Graham G; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A; Schleutker, Johanna; Nordestgaard, Børge G; Travis, Ruth C; Hamdy, Freddie; Neal, David E; Pashayan, Nora; Khaw, Kay-Tee; Stanford, Janet L; Blot, William J; Thibodeau, Stephen N; Maier, Christiane; Kibel, Adam S; Cybulski, Cezary; Cannon-Albright, Lisa; Brenner, Hermann; Kaneva, Radka; Batra, Jyotsna; Teixeira, Manuel R; Pandha, Hardev; Lu, Yong-Jie; Park, Jong Y

    2018-06-07

    The use of single nucleotide polymorphism (SNP) interactions to predict complex diseases has been getting more attention during the past decade, but related statistical methods are still immature. We previously proposed the SNP Interaction Pattern Identifier (SIPI) approach to evaluate 45 SNP interaction patterns. SIPI is statistically powerful but suffers from a large computation burden. For large-scale studies, it is necessary to use a powerful and computation-efficient method. The objective of this study is to develop an evidence-based mini-version of SIPI as a screening tool or for solitary use, and to evaluate the impact of inheritance mode and model structure on detecting SNP-SNP interactions. We tested two candidate approaches: the 'Five-Full' and 'AA9int' methods. The Five-Full approach is composed of the five full interaction models considering three inheritance modes (additive, dominant and recessive). The AA9int approach is composed of nine interaction models by considering non-hierarchical model structure and the additive mode. Our simulation results show that AA9int has similar statistical power compared to SIPI and is superior to the Five-Full approach, and that the impact of the non-hierarchical model structure is greater than that of the inheritance mode in detecting SNP-SNP interactions. In summary, AA9int is recommended as a powerful tool to be used either alone or as the screening stage of a two-stage approach (AA9int+SIPI) for detecting SNP-SNP interactions in large-scale studies. The 'AA9int' and 'parAA9int' functions (standard and parallel computing versions) are included in the SIPI R package, which is freely available at https://linhuiyi.github.io/LinHY_Software/. hlin1@lsuhsc.edu. Supplementary data are available at Bioinformatics online.
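
    For illustration, one interaction model of the general kind discussed above can be fit as a logistic regression with additively coded genotypes and tested with a likelihood-ratio test. This is a hypothetical Python sketch on simulated data, not the authors' AA9int R implementation:

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import chi2

      rng = np.random.default_rng(2)
      n = 2000
      snp1 = rng.integers(0, 3, n).astype(float)   # additive coding: 0/1/2 minor alleles
      snp2 = rng.integers(0, 3, n).astype(float)
      logit = -1.0 + 0.15 * snp1 + 0.25 * snp1 * snp2   # built-in interaction effect
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

      # Full model with the SNP-SNP interaction term vs. main-effects-only null.
      X_full = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
      X_null = sm.add_constant(np.column_stack([snp1, snp2]))
      full = sm.Logit(y, X_full).fit(disp=0)
      null = sm.Logit(y, X_null).fit(disp=0)

      lr = 2 * (full.llf - null.llf)   # 1-df likelihood-ratio statistic
      print("interaction p-value:", chi2.sf(lr, df=1))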

  17. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the thermodynamically equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure based Navier-Stokes solver using body-fitted coordinate was used to calculate the laser-supported rocket flow which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several different throat size thrusters operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse around 1500 sec. The heat loading on the wall of the calculated thrusters were also estimated, and it is comparable to heat loading on the conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 secs due to the finite chemical reaction rate.

  18. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2 to the m power). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
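
    For contrast, the conventional direct method that the paper improves on is easy to sketch: each syndrome S_j is simply the received polynomial evaluated at alpha^j in GF(2^m). Below is a toy GF(2^4) example using log/antilog tables; the field, primitive polynomial, and received word are illustrative:

      # Arithmetic in GF(2^4) with primitive polynomial x^4 + x + 1 (0b10011).
      EXP, LOG = [0] * 30, [0] * 16
      x = 1
      for i in range(15):
          EXP[i] = EXP[i + 15] = x
          LOG[x] = i
          x <<= 1
          if x & 0x10:
              x ^= 0b10011

      def gf_mul(a, b):
          return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

      def syndrome(received, j):
          """S_j = r(alpha^j), evaluated by Horner's rule over GF(2^m)."""
          alpha_j = EXP[j % 15]
          s = 0
          for coeff in received:        # highest-degree coefficient first
              s = gf_mul(s, alpha_j) ^ coeff
          return s

      # A length-15 received word (illustrative); a codeword would give S_j = 0.
      r = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1]
      print([syndrome(r, j) for j in range(1, 5)])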

  19. Quantum Computation

    NASA Astrophysics Data System (ADS)

    Aharonov, Dorit

    In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review tells the story of theoretical quantum computation. I leave out the developing topic of experimental realizations of the model, and neglect other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers, and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies and finite precision. This question cannot be separated from that of quantum complexity because any realistic model will inevitably be subjected to such inaccuracies. I tried to put all results in their context, asking what the implications to other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation on fundamental physical questions such as the transition from quantum to classical physics.

  20. The use of wireless laptop computers for computer-assisted learning in pharmacokinetics.

    PubMed

    Munar, Myrna Y; Singh, Harleen; Belle, Donna; Brackett, Carolyn C; Earle, Sandra B

    2006-02-15

    To implement computer-assisted learning workshops into pharmacokinetics courses in a doctor of pharmacy (PharmD) program. Workshops were designed for students to utilize computer software programs on laptop computers to build pharmacokinetic models to predict drug concentrations resulting from various dosage regimens. In addition, students were able to visualize through graphing programs how altering different parameters changed drug concentration-time curves. Surveys were conducted to measure students' attitudes toward computer technology before and after implementation. Finally, traditional examinations were used to evaluate student learning. Doctor of pharmacy students responded favorably to the use of wireless laptop computers in problem-based pharmacokinetic workshops. Eighty-eight percent (n = 61/69) and 82% (n = 55/67) of PharmD students completed surveys before and after computer implementation, respectively. Prior to implementation, 95% of students agreed that computers would enhance learning in pharmacokinetics. After implementation, 98% of students strongly agreed (p < 0.05) that computers enhanced learning. Examination results were significantly higher after computer implementation (89% with computers vs. 84% without computers; p = 0.01). Implementation of wireless laptop computers in a pharmacokinetic course enabled students to construct their own pharmacokinetic models that could respond to changing parameters. Students had greater comprehension and were better able to interpret results and provide appropriate recommendations. Computer-assisted pharmacokinetic techniques can be powerful tools when making decisions about drug therapy.
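
    The flavor of the models students build in such workshops can be conveyed in a few lines: a one-compartment, first-order-absorption model, with superposition for a repeated dosing regimen. The parameter values below are illustrative, not taken from the course:

      import numpy as np

      def conc_oral(t, dose, F, V, ka, ke):
          """One-compartment model with first-order absorption (single dose)."""
          return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      def regimen(t, dose, tau, n_doses, **pk):
          """Superposition of identical doses given every tau hours."""
          total = np.zeros_like(t, dtype=float)
          for k in range(n_doses):
              shifted = t - k * tau
              total += np.where(shifted >= 0,
                                conc_oral(np.maximum(shifted, 0), dose, **pk), 0.0)
          return total

      t = np.linspace(0, 48, 481)                   # hours
      pk = dict(F=0.9, V=30.0, ka=1.2, ke=0.15)     # illustrative parameters
      c = regimen(t, dose=500.0, tau=12.0, n_doses=4, **pk)
      print(f"Cmax ~ {c.max():.2f} mg/L at t = {t[c.argmax()]:.1f} h")

    Changing ka, ke, or the dosing interval and re-plotting the curve is exactly the kind of what-if exploration the workshops describe.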

  1. The Use of Wireless Laptop Computers for Computer-Assisted Learning in Pharmacokinetics

    PubMed Central

    Munar, Myrna Y.; Singh, Harleen; Belle, Donna; Brackett, Carolyn C.; Earle, Sandra B.

    2006-01-01

    Objective To implement computer-assisted learning workshops into pharmacokinetics courses in a doctor of pharmacy (PharmD) program. Design Workshops were designed for students to utilize computer software programs on laptop computers to build pharmacokinetic models to predict drug concentrations resulting from various dosage regimens. In addition, students were able to visualize through graphing programs how altering different parameters changed drug concentration-time curves. Surveys were conducted to measure students’ attitudes toward computer technology before and after implementation. Finally, traditional examinations were used to evaluate student learning. Assessment Doctor of pharmacy students responded favorably to the use of wireless laptop computers in problem-based pharmacokinetic workshops. Eighty-eight percent (n = 61/69) and 82% (n = 55/67) of PharmD students completed surveys before and after computer implementation, respectively. Prior to implementation, 95% of students agreed that computers would enhance learning in pharmacokinetics. After implementation, 98% of students strongly agreed (p < 0.05) that computers enhanced learning. Examination results were significantly higher after computer implementation (89% with computers vs. 84% without computers; p = 0.01). Conclusion Implementation of wireless laptop computers in a pharmacokinetic course enabled students to construct their own pharmacokinetic models that could respond to changing parameters. Students had greater comprehension and were better able to interpret results and provide appropriate recommendations. Computer-assisted pharmacokinetic techniques can be powerful tools when making decisions about drug therapy. PMID:17136147

  2. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  3. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an icon-driven Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend-based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.
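
    A tiny discrete-event sketch in the same spirit (generic Python, not the Extend model itself): orbiters cycle through a single processing facility, and launch rate, utilization, and wait statistics fall out of the event bookkeeping. All durations are invented:

      import random

      random.seed(3)
      SIM_DAYS = 365.0
      processing_mean = 80.0          # days per orbiter processing flow (illustrative)
      orbiters = 3                    # fleet size (illustrative)

      busy_until, busy_time, launches, waits = 0.0, 0.0, 0, []
      arrivals = sorted(random.uniform(0, 60) for _ in range(orbiters))
      while arrivals:
          t = arrivals.pop(0)
          start = max(t, busy_until)  # wait if the facility is occupied
          dur = random.expovariate(1.0 / processing_mean)
          if start + dur > SIM_DAYS:  # flow would not finish within the horizon
              continue
          waits.append(start - t)
          busy_until = start + dur
          busy_time += dur
          launches += 1
          arrivals.append(busy_until) # orbiter re-enters the processing flow
          arrivals.sort()

      print(f"launches/yr={launches}, utilization={busy_time / SIM_DAYS:.0%}, "
            f"mean wait={sum(waits) / max(len(waits), 1):.1f} d")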

  4. CONC/11: A computer program for calculating the performance of dish-type solar thermal collectors and power systems

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1984-01-01

    The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.
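
    The heart of such a calculation can be sketched with generic textbook forms: a receiver energy balance giving a collector efficiency, multiplied by a heat-engine efficiency, scanned over receiver temperature to locate the peak, much as the program does when optimizing. These equations and constants are illustrative, not CONC/11's actual models:

      import numpy as np

      SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W/m^2K^4

      def system_efficiency(T_receiver, insolation=1000.0, concentration=2000.0,
                            optical_eff=0.85, absorptance=0.95, emittance=0.9,
                            engine_fraction=0.75, T_ambient=300.0):
          """Collector efficiency (energy balance) times engine efficiency."""
          flux_in = optical_eff * concentration * insolation
          radiated = emittance * SIGMA * T_receiver**4   # re-radiation loss
          eta_collector = (absorptance * flux_in - radiated) / (concentration * insolation)
          eta_engine = engine_fraction * (1.0 - T_ambient / T_receiver)  # Carnot fraction
          return eta_collector * eta_engine

      # Scan receiver temperature to find the peak system efficiency.
      T = np.linspace(600.0, 2000.0, 141)
      eta = system_efficiency(T)
      print(f"peak efficiency {eta.max():.3f} at T = {T[eta.argmax()]:.0f} K")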

  5. Development of a Low-Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations. Additionally, one of the configurations was built and tested at GRC, and the experimental data is compared with the predictions.

  6. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, simulation results show that this RRNS system can reduce the energy-delay-product by about 3× for multiplication intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
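
    The closure property described above is easy to demonstrate: arithmetic proceeds independently in each residue channel, the result is recovered with the Chinese Remainder Theorem, and a redundant modulus provides a consistency check. A minimal sketch of the number system only, not the authors' core design; the moduli are illustrative:

      from math import prod

      MODULI = (199, 233, 194, 239)   # main residues (pairwise coprime, illustrative)
      REDUNDANT = (251,)              # extra residue used only for checking

      def encode(x, moduli=MODULI + REDUNDANT):
          return tuple(x % m for m in moduli)

      def crt(residues, moduli):
          """Chinese Remainder Theorem reconstruction."""
          M = prod(moduli)
          x = 0
          for r, m in zip(residues, moduli):
              Mi = M // m
              x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
          return x % M

      a, b = 12345, 6789
      ra, rb = encode(a), encode(b)
      # Closure: add componentwise, with no carries between residue channels.
      rsum = tuple((u + v) % m for u, v, m in zip(ra, rb, MODULI + REDUNDANT))
      value = crt(rsum[:len(MODULI)], MODULI)
      # Consistency check against the redundant channel detects residue errors.
      assert value % REDUNDANT[0] == rsum[len(MODULI)]
      print(value == a + b)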

  7. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, simulation results show that this RRNS system can reduce the energy-delay-product by about 3× for multiplication intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.

  8. A density functional theory study of the role of functionalized graphene particles as effective additives in power cable insulation

    PubMed Central

    Song, Shuwei; Zhao, Hong; Zheng, Xiaonan; Zhang, Hui; Wang, Ying; Han, Baozhong

    2018-01-01

    The role of a series of functionalized graphene additives in power cable insulation in suppressing the growth of electrical treeing and preventing the degradation of the polymer matrix has been investigated by density functional theory calculations. Bader charge analysis indicates that pristine, doped or defect graphene could effectively capture hot electrons to block their attack on cross-linked polyethylene (XLPE) because of the π–π conjugated unsaturated structures. Further exploration of the electronic properties in the interfacial region between the additives and XLPE shows that N-doped single-vacancy graphene, graphene oxide and B-, N-, Si- or P-doped graphene oxide have relatively strong physical interaction with XLPE to restrict its mobility and rather weak chemical activity to prevent the cleavage of the C–H or C–C bond, suggesting that they are all potential candidates as effective additives. The understanding of the features of functionalized graphene additives in trapping electrons and interfacial interaction will assist in the screening of promising additives as voltage stabilizers in power cables. PMID:29515821

  9. A density functional theory study of the role of functionalized graphene particles as effective additives in power cable insulation.

    PubMed

    Song, Shuwei; Zhao, Hong; Zheng, Xiaonan; Zhang, Hui; Liu, Yang; Wang, Ying; Han, Baozhong

    2018-02-01

    The role of a series of functionalized graphene additives in power cable insulation in suppressing the growth of electrical treeing and preventing the degradation of the polymer matrix has been investigated by density functional theory calculations. Bader charge analysis indicates that pristine, doped or defect graphene could effectively capture hot electrons to block their attack on cross-linked polyethylene (XLPE) because of the π-π conjugated unsaturated structures. Further exploration of the electronic properties in the interfacial region between the additives and XLPE shows that N-doped single-vacancy graphene, graphene oxide and B-, N-, Si- or P-doped graphene oxide have relatively strong physical interaction with XLPE to restrict its mobility and rather weak chemical activity to prevent the cleavage of the C-H or C-C bond, suggesting that they are all potential candidates as effective additives. The understanding of the features of functionalized graphene additives in trapping electrons and interfacial interaction will assist in the screening of promising additives as voltage stabilizers in power cables.

  10. The virtual digital nuclear power plant: A modern tool for supporting the lifecycle of VVER-based nuclear power units

    NASA Astrophysics Data System (ADS)

    Arkadov, G. V.; Zhukavin, A. P.; Kroshilin, A. E.; Parshikov, I. A.; Solov'ev, S. L.; Shishov, A. V.

    2014-10-01

    The article describes the "Virtual Digital VVER-Based Nuclear Power Plant" computerized system, comprising a totality of verified initial data (sets of input data for a model intended for describing the behavior of nuclear power plant (NPP) systems in design and emergency modes of their operation) and a unified system of new-generation computation codes intended for carrying out coordinated computation of the variety of physical processes in the reactor core and NPP equipment. Experiments with the demonstration version of the "Virtual Digital VVER-Based NPP" computerized system have shown that it is in principle possible to set up a unified system of computation codes in a common software environment for carrying out interconnected calculations of various physical phenomena at NPPs constructed according to the standard AES-2006 project. With the full-scale version of the "Virtual Digital VVER-Based NPP" computerized system put into operation, the concerned engineering, design, construction, and operating organizations will have access to all necessary information relating to the NPP power unit project throughout its entire lifecycle. The domestically developed commercial-grade software product, operating as an independent application within the project, will bring additional competitive advantages in the modern market of nuclear power technologies.

  11. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  12. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, both positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade for a range of incidence angles were computed in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  13. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants

    PubMed Central

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-01-01

    Motivation: Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test—a score test—with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene–gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. Results: After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test—up to 23 more associations—whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene–gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Availability: Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25075117
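
    Once the null and alternative model likelihoods are in hand (the expensive fitting step these speedups target), the LR statistic itself is cheap; for a single variance component tested on its boundary, the null distribution is the standard 50:50 mixture of chi-squared distributions with 0 and 1 degrees of freedom (Self and Liang, 1987). A generic sketch of that final step, with illustrative log-likelihood values, not the FaST-LMM code:

      from scipy.stats import chi2

      def lr_pvalue(llf_alt, llf_null):
          """P-value for a 1-variance-component LR test on the boundary.

          The null distribution is a 50:50 mixture of a point mass at zero
          (chi-squared with 0 df) and chi-squared with 1 df.
          """
          stat = max(0.0, 2.0 * (llf_alt - llf_null))
          return 0.5 * chi2.sf(stat, df=1) if stat > 0 else 1.0

      # Illustrative log-likelihoods from fitted null and alternative models.
      print(lr_pvalue(llf_alt=-1234.1, llf_null=-1236.8))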

  14. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants.

    PubMed

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-11-15

    Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test, a score test, with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene-gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test (up to 23 more associations), whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene-gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. heckerma@microsoft.com Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  15. Water and Power Systems Co-optimization under a High Performance Computing Framework

    NASA Astrophysics Data System (ADS)

    Xuan, Y.; Arumugam, S.; DeCarolis, J.; Mahinthakumar, K.

    2016-12-01

    Water and energy systems optimization has traditionally been treated as two separate processes, despite the systems' intrinsic interconnections (e.g., water is used for hydropower generation, and thermoelectric cooling requires a large amount of water withdrawal). Given the challenges of urbanization, technology uncertainty and resource constraints, and the imminent threat of climate change, a cyberinfrastructure is needed to facilitate and expedite research into the complex management of these two systems. To address these issues, we developed a High Performance Computing (HPC) framework for stochastic co-optimization of water and energy resources to inform water allocation and electricity demand. The project aims to improve conjunctive management of water and power systems under climate change by incorporating improved ensemble forecast models of streamflow and power demand. First, by downscaling and spatio-temporally disaggregating multimodel climate forecasts from General Circulation Models (GCMs), temperature and precipitation forecasts are obtained and input into multi-reservoir and power systems models. Extended from Optimus (Optimization Methods for Universal Simulators), the framework drives the multi-reservoir model and the power system model, Temoa (Tools for Energy Model Optimization and Analysis), and uses the Particle Swarm Optimization (PSO) algorithm to solve high dimensional stochastic problems. The utility of climate forecasts on the cost of water and power systems operations is assessed and quantified based on different forecast scenarios (i.e., no-forecast, multimodel forecast and perfect forecast). Analysis of risk management actions and renewable energy deployments will be investigated for the Catawba River basin, an area with adequate hydroclimate predicting skill and a critical basin with 11 reservoirs that supplies water and generates power for both North and South Carolina. Further research using this scalable decision supporting framework will provide
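
    The optimization kernel named above, particle swarm optimization, is compact enough to sketch. Here it minimizes a toy quadratic cost standing in for the expensive coupled water-power simulation; the coefficients are common textbook defaults, not the project's settings:

      import numpy as np

      rng = np.random.default_rng(4)

      def cost(x):                     # stand-in for the coupled simulation
          return np.sum((x - 1.5) ** 2, axis=-1)

      n_particles, dim, iters = 30, 5, 200
      w, c1, c2 = 0.72, 1.49, 1.49     # inertia and acceleration coefficients
      x = rng.uniform(-5, 5, (n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_cost = x.copy(), cost(x)
      gbest = pbest[pbest_cost.argmin()].copy()

      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          c = cost(x)
          better = c < pbest_cost
          pbest[better], pbest_cost[better] = x[better], c[better]
          gbest = pbest[pbest_cost.argmin()].copy()

      print(gbest.round(3))            # converges near [1.5, ..., 1.5]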

  16. Multiple-Swarm Ensembles: Improving the Predictive Power and Robustness of Predictive Models and Its Use in Computational Biology.

    PubMed

    Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark

    2018-01-01

    Machine learning is an integral part of computational biology, and has already shown its use in various applications, such as prognostic tests. In the last few years in the non-biological machine learning community, ensembling techniques have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work, we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop a methodology of ensembling, Multi-Swarm Ensemble (MSWE) by using multiple particle swarm optimizations and demonstrate its ability to further enhance the performance of ensembles.

  17. Computational fluid dynamics study on mixing mode and power consumption in anaerobic mono- and co-digestion.

    PubMed

    Zhang, Yuan; Yu, Guangren; Yu, Liang; Siddhu, Muhammad Abdul Hanan; Gao, Mengjiao; Abdeltawab, Ahmed A; Al-Deyab, Salem S; Chen, Xiaochun

    2016-03-01

    Computational fluid dynamics (CFD) was applied to investigate mixing mode and power consumption in anaerobic mono- and co-digestion. Cattle manure (CM) and corn stover (CS) were used as feedstock and a stirred tank reactor (STR) was used as the digester. Power numbers obtained by the CFD simulation were compared with those from the experimental correlation. Results showed that the standard k-ε model was more appropriate than other turbulence models. A new index, net power production instead of gas production, was proposed to optimize the feedstock ratio for anaerobic co-digestion. Results showed that the flow field and power consumption changed significantly in co-digestion of CM and CS compared with those in mono-digestion of either CM or CS. For different mixing modes, the optimum feedstock ratio for co-digestion changed with net power production. The best CM/CS ratios for continuous mixing, intermittent mixing I, and intermittent mixing II were 1:1, 1:1, and 1:3, respectively. Copyright © 2016. Published by Elsevier Ltd.
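
    The mixing-power bookkeeping behind such a net-power index can be sketched with the standard impeller relations: shaft power from torque, the dimensionless power number Np = P/(rho N^3 D^5), and a net balance against the energy recovered as biogas. All numbers below are illustrative:

      import math

      rho = 1020.0        # slurry density, kg/m^3 (illustrative)
      N = 1.0             # impeller speed, rev/s
      D = 0.3             # impeller diameter, m
      torque = 1.8        # simulated shaft torque, N*m (illustrative)

      P = 2 * math.pi * N * torque            # mixing power, W
      Np = P / (rho * N**3 * D**5)            # dimensionless power number
      print(f"P = {P:.1f} W, Np = {Np:.2f}")

      # Net power production: energy value of the biogas minus mixing energy.
      gas_energy_per_day = 250.0              # Wh/day from biogas (illustrative)
      mixing_hours = 6.0                      # intermittent mixing duty cycle
      net = gas_energy_per_day - P * mixing_hours
      print(f"net = {net:.0f} Wh/day")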

  18. Computer study of emergency shutdowns of a 60-kilowatt reactor Brayton space power system

    NASA Technical Reports Server (NTRS)

    Tew, R. C.; Jefferies, K. S.

    1974-01-01

    A digital computer study of emergency shutdowns of a 60-kWe reactor Brayton power system was conducted. Malfunctions considered were (1) loss of reactor coolant flow, (2) loss of Brayton system gas flow, (3) turbine overspeed, and (4) a reactivity insertion error. Loss of reactor coolant flow was the most serious malfunction for the reactor. Methods for moderating the reactor transients due to this malfunction are considered.

  19. The Effects of Computer-Assisted Instruction on Student Achievement in Addition and Subtraction at First Grade Level.

    ERIC Educational Resources Information Center

    Spivey, Patsy M.

    This study was conducted to determine whether the traditional classroom approach to instruction involving the addition and subtraction of number facts (digits 0-6) is more or less effective than the traditional classroom approach plus a commercially-prepared computer game. A pretest-posttest control group design was used with two groups of first…

  20. The power of PowerPoint.

    PubMed

    Niamtu , J

    2001-08-01

    Carousel slide presentations have been used for academic and clinical presentations since the late 1950s. However, advances in computer technology have caused a paradigm shift, and digital presentations are quickly becoming standard for clinical presentations. The advantages of digital presentations include cost savings; portability; easy updating capability; Internet access; multimedia functions, such as animation, pictures, video, and sound; and customization to augment audience interest and attention. Microsoft PowerPoint has emerged as the most popular digital presentation software and is currently used by many practitioners with and without significant computer expertise. The user-friendly platform of PowerPoint enables even the novice presenter to incorporate digital presentations into his or her profession. PowerPoint offers many advanced options that, with a minimal investment of time, can be used to create more interactive and professional presentations for lectures, patient education, and marketing. Examples of advanced PowerPoint applications are presented in a stepwise manner to unveil the full power of PowerPoint. By incorporating these techniques, medical practitioners can easily personalize, customize, and enhance their PowerPoint presentations. Complications, pitfalls, and caveats are discussed to detour and prevent misadventures in digital presentations. Relevant Web sites are listed to further update, customize, and communicate PowerPoint techniques.

  1. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.

  2. Enhancing Specific Energy and Power in Asymmetric Supercapacitors - A Synergetic Strategy based on the Use of Redox Additive Electrolytes

    PubMed Central

    Singh, Arvinder; Chandra, Amreesh

    2016-01-01

    The strategy of using a redox additive electrolyte in combination with multiwall carbon nanotubes/metal oxide composites leads to substantial improvements in the specific energy and power of asymmetric supercapacitors (ASCs). When the pure electrolyte is optimally modified with a redox additive, viz. KI, a ~105% increase in the specific energy is obtained with good cyclic stability over 3,000 charge-discharge cycles and ~14.7% capacitance fade. This increase is a direct consequence of the iodine/iodide redox pairs that strongly modify the faradaic and non-faradaic type reactions occurring on the surface of the electrodes. Contrary to what is shown in a few earlier reports, it is established that an indiscriminate increase in the concentration of redox additives will lead to performance loss. Suitable explanations are given based on theoretical laws. The specific energy and power values being reported for the fabricated ASCs are comparable to or higher than those reported for ASCs based on toxic acetonitrile or expensive ionic liquids. The paper shows that the use of a redox additive is an economically favorable strategy for obtaining cost-effective and environmentally friendly ASCs. PMID:27184260

  3. Evolutionary growth for Space Station Freedom electrical power system

    NASA Technical Reports Server (NTRS)

    Marshall, Matthew Fisk; Mclallin, Kerry; Zernic, Mike

    1989-01-01

    Over an operational lifetime of at least 30 yr, Space Station Freedom will encounter increased Space Station user requirements and advancing technologies. The Space Station electrical power system is designed with the flexibility to accommodate these emerging technologies and expert systems, and is being designed with the necessary software hooks and hardware scars to accommodate increased growth demand. The electrical power system is planned to grow from the initial 75 kW up to 300 kW. The Phase 1 station will utilize photovoltaic arrays to produce the electrical power; however, for growth to 300 kW, solar dynamic power modules will be utilized. Pairs of 25 kW solar dynamic power modules will be added to the station to reach the power growth level. The addition of solar dynamic power in the growth phase places constraints on initial Space Station systems such as guidance, navigation, and control; external thermal; truss structural stiffness; and computational capabilities and storage, which must be planned in to facilitate the addition of the solar dynamic modules.

  4. Power API Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-12-04

    The software serves two purposes. The first purpose of the software is to prototype the Sandia High Performance Computing Power Application Programming Interface Specification effort. The specification can be found at http://powerapi.sandia.gov. Prototypes of the specification were developed in parallel with the development of the specification. Release of the prototype will be instructive to anyone who intends to implement the specification. More specifically, our vendor collaborators will benefit from the availability of the prototype. The second purpose is direct support of the PowerInsight power measurement device, which was co-developed with Penguin Computing. The software provides a cluster-wide measurement capability enabled by the PowerInsight device. The software can be used by anyone who purchases a PowerInsight device. The software will allow the user to easily collect power and energy information of a node that is instrumented with PowerInsight. The software can also be used as an example prototype implementation of the High Performance Computing Power Application Programming Interface Specification.
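
    A generic sketch of the measurement loop such software enables: sample node power periodically and integrate to energy with the trapezoidal rule. The read_power() function below is a hypothetical stand-in, not a function from the Power API specification or the PowerInsight interface:

      import random
      import time

      def read_power():
          """Hypothetical sensor read standing in for a PowerInsight-style query."""
          return 180.0 + random.uniform(-10.0, 10.0)   # watts

      def measure_energy(duration_s=5.0, interval_s=0.5):
          """Integrate sampled power to energy (joules) via the trapezoidal rule."""
          t_prev, p_prev = time.monotonic(), read_power()
          energy, t_end = 0.0, t_prev + duration_s
          while t_prev < t_end:
              time.sleep(interval_s)
              t, p = time.monotonic(), read_power()
              energy += 0.5 * (p + p_prev) * (t - t_prev)
              t_prev, p_prev = t, p
          return energy

      e = measure_energy()
      print(f"{e:.0f} J over 5 s (~{e / 5:.0f} W average)")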

  5. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive effort for low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, and driven by a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in the arbitrarily selected target and nontarget databases. Hadoop and MapReduce as parallel and distributed computing tools with commodity hardware are used in this pipeline. This approach brings the power of high-performance computing into ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. A considerable number of detected unique and common DNA signatures of the target database bring opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
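
    The core of such a pipeline, k-mer frequency counting, reduces to a sliding window plus a counter; under Hadoop streaming the same logic splits into a mapper that emits k-mers and a reducer that sums counts. A single-machine sketch with invented toy sequences (GkmerG's actual implementation may differ):

      from collections import Counter

      def kmers(sequence, k):
          """Yield every k-mer in the sequence (the 'map' step)."""
          seq = sequence.upper()
          for i in range(len(seq) - k + 1):
              kmer = seq[i:i + k]
              if set(kmer) <= {"A", "C", "G", "T"}:   # skip ambiguous bases
                  yield kmer

      # Counting across genomes (the 'reduce' step); a signature candidate is a
      # k-mer present in every target genome and absent from all nontargets.
      target = ["ATGCGTACGTTAGC", "GGATGCGTACGTAA"]   # illustrative sequences
      nontarget = ["TTTTACGCGCGATA"]
      k = 6
      counts = [Counter(kmers(g, k)) for g in target]
      shared = set.intersection(*(set(c) for c in counts))
      non = set().union(*(set(Counter(kmers(g, k))) for g in nontarget))
      print(sorted(shared - non))                     # candidate signatures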

  6. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.

    PubMed

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive effort for low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, and driven by a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in the arbitrarily selected target and nontarget databases. Hadoop and MapReduce as parallel and distributed computing tools with commodity hardware are used in this pipeline. This approach brings the power of high-performance computing into ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. A considerable number of detected unique and common DNA signatures of the target database bring opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.

  7. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    PubMed Central

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and
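
    As a rough illustration of the AMA-SGD idea, the sketch below shows a generic stochastic-gradient loop over unit-norm filters. The cost gradient is left as a user-supplied callable because the paper's derived gradients are estimator-specific and are not reproduced here; the demo cost is an arbitrary stand-in.

        import numpy as np

        def sgd_filters(stimuli, cost_grad, n_filters, lr=0.01, epochs=50, seed=0):
            """Generic SGD filter learning. `cost_grad(f, x)` must return the
            gradient of the task-specific cost w.r.t. the filters f for one
            stimulus x; only the optimization shell is shown here."""
            rng = np.random.default_rng(seed)
            f = rng.standard_normal((n_filters, stimuli.shape[1]))
            f /= np.linalg.norm(f, axis=1, keepdims=True)  # unit-norm receptive fields
            for _ in range(epochs):
                for x in rng.permutation(stimuli):         # one stimulus per step
                    f -= lr * cost_grad(f, x)
                    f /= np.linalg.norm(f, axis=1, keepdims=True)
            return f

        # Toy usage: drive filter responses toward zero (stand-in cost sum_i (f_i.x)^2).
        demo = np.random.default_rng(1).standard_normal((100, 16))
        grad = lambda f, x: 2 * (f @ x)[:, None] * x[None, :]
        print(sgd_filters(demo, grad, n_filters=2).shape)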

  8. Digital computer study of nuclear reactor thermal transients during startup of 60-kWe Brayton power conversion system

    NASA Technical Reports Server (NTRS)

    Jefferies, K. S.; Tew, R. C.

    1974-01-01

    A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.

  9. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY PROCEDURES FOR TVA POWER SECURITIES ISSUED THROUGH THE FEDERAL RESERVE BANKS § 1314.10 Additional provisions...

  10. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY PROCEDURES FOR TVA POWER SECURITIES ISSUED THROUGH THE FEDERAL RESERVE BANKS § 1314.10 Additional provisions...

  11. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    PubMed

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a GARLI 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The GARLI web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the GARLI web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  12. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Vinod

    2017-05-05

High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank, nanofluidized, molten-salt-based thermocline TES system under various concentrations and sizes of the particle suspension. Our objectives were to utilize sensible heat that operates with the least irreversibility by using nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency, and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  13. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

The use of computer clusters for the computational sciences, including computational physics, is vital, as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
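
    The speed-up mechanism is the usual embarrassingly parallel split of Monte Carlo samples across nodes. The following single-machine stand-in uses Python's multiprocessing, with worker processes playing the role of grid nodes; the sample counts and worker count are arbitrary.

        import random
        from multiprocessing import Pool

        def mc_hits(n_samples):
            """One worker's share of a Monte Carlo estimate of pi (dart throwing)."""
            rng = random.Random()
            return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                       for _ in range(n_samples))

        if __name__ == "__main__":
            total, workers = 4_000_000, 4      # workers stand in for grid nodes
            with Pool(workers) as pool:
                hits = sum(pool.map(mc_hits, [total // workers] * workers))
            print("pi ~", 4 * hits / total)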

  14. IBM Cloud Computing Powering a Smarter Planet

    NASA Astrophysics Data System (ADS)

    Zhu, Jinzy; Fang, Xing; Guo, Zhe; Niu, Meng Hua; Cao, Fan; Yue, Shuang; Liu, Qin Yu

With the increasing need for intelligent systems supporting the world's businesses, cloud computing has emerged as a dominant trend in providing a dynamic infrastructure to make such intelligence possible. This article introduces how to build a smarter planet with cloud computing technology. First, it explains why we need the cloud and traces the evolution of cloud technology. Second, it analyzes the value of cloud computing and how to apply cloud technology. Finally, it predicts the future of the cloud in the smarter planet.

  15. Using additive manufacturing in accuracy evaluation of reconstructions from computed tomography.

    PubMed

    Smith, Erin J; Anstey, Joseph A; Venne, Gabriel; Ellis, Randy E

    2013-05-01

    Bone models derived from patient imaging and fabricated using additive manufacturing technology have many potential uses including surgical planning, training, and research. This study evaluated the accuracy of bone surface reconstruction of two diarthrodial joints, the hip and shoulder, from computed tomography. Image segmentation of the tomographic series was used to develop a three-dimensional virtual model, which was fabricated using fused deposition modelling. Laser scanning was used to compare cadaver bones, printed models, and intermediate segmentations. The overall bone reconstruction process had a reproducibility of 0.3 ± 0.4 mm. Production of the model had an accuracy of 0.1 ± 0.1 mm, while the segmentation had an accuracy of 0.3 ± 0.4 mm, indicating that segmentation accuracy was the key factor in reconstruction. Generally, the shape of the articular surfaces was reproduced accurately, with poorer accuracy near the periphery of the articular surfaces, particularly in regions with periosteum covering and where osteophytes were apparent.

  16. Predicting the Noise of High Power Fluid Targets Using Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Moore, Michael; Covrig Dusa, Silviu

The 2.5 kW liquid hydrogen (LH2) target used in the Qweak parity violation experiment is the highest power LH2 target in the world and the first to be designed with Computational Fluid Dynamics (CFD) at Jefferson Lab. The Qweak experiment determined the weak charge of the proton by measuring the parity-violating elastic scattering asymmetry of longitudinally polarized electrons from unpolarized liquid hydrogen at small momentum transfer (Q2 = 0.025 GeV2). This target satisfied the design goals of <1% luminosity reduction and <5% contribution to the total asymmetry width (the Qweak target achieved 2%, or 55 ppm). State-of-the-art time-dependent CFD simulations are being developed to improve the predictions of target noise on the time scale of the electron beam helicity period. These predictions will be benchmarked against the Qweak target data. This work is an essential component in future designs of very high power, low-noise targets like MOLLER (5 kW, target noise asymmetry contribution <25 ppm) and MESA (4.5 kW).

  17. Using Self-regulation to Successfully Overcome the Negotiation Disadvantage of Low Power

    PubMed Central

    Jäger, Andreas; Loschelder, David D.; Friese, Malte

    2017-01-01

    A plethora of studies has demonstrated that low-power negotiators attain lower outcomes compared to high-power negotiators. We argue that this low-power disadvantage can be conceptualized as impaired goal attainment and that self-regulation can help to overcome it. Three experiments tested this assertion. In Study 1, low-power negotiators attained lower profits compared to their high-power opponents in a face-to-face negotiation. Negotiators who set themselves goals and those who additionally formed if-then plans prior to the negotiation overcame the low-power disadvantage. Studies 2 and 3 replicated these effects in computer-mediated negotiations: Low-power negotiators conceded more than high-power negotiators. Again, setting goals and forming additional if-then plans helped to counter the power disadvantage. Process analyses revealed that negotiators’ concession-making at the start of the negotiation mediated both the low-power disadvantage and the beneficial effects of self-regulation. The present findings show how the low-power disadvantage unfolds in negotiations and how self-regulatory techniques can help to overcome it. PMID:28382005

  18. Using Self-regulation to Successfully Overcome the Negotiation Disadvantage of Low Power.

    PubMed

    Jäger, Andreas; Loschelder, David D; Friese, Malte

    2017-01-01

    A plethora of studies has demonstrated that low-power negotiators attain lower outcomes compared to high-power negotiators. We argue that this low-power disadvantage can be conceptualized as impaired goal attainment and that self-regulation can help to overcome it. Three experiments tested this assertion. In Study 1, low-power negotiators attained lower profits compared to their high-power opponents in a face-to-face negotiation. Negotiators who set themselves goals and those who additionally formed if-then plans prior to the negotiation overcame the low-power disadvantage. Studies 2 and 3 replicated these effects in computer-mediated negotiations: Low-power negotiators conceded more than high-power negotiators. Again, setting goals and forming additional if-then plans helped to counter the power disadvantage. Process analyses revealed that negotiators' concession-making at the start of the negotiation mediated both the low-power disadvantage and the beneficial effects of self-regulation. The present findings show how the low-power disadvantage unfolds in negotiations and how self-regulatory techniques can help to overcome it.

  19. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.
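
    The bookkeeping behind such a point exposure model reduces to integrating coupled nuclide balance equations under a given flux. Below is a minimal Python sketch with a hypothetical two-nuclide capture chain; the matrix entries and flux are placeholders, not PREMOR's data or its actual two-group model.

        import numpy as np

        def deplete(n0, reaction_matrix, flux, dt, steps):
            """Explicit-Euler integration of dn/dt = (A * flux) n, where A couples
            nuclide loss and production per unit flux. PREMOR's point model is
            richer; this shows only the density bookkeeping."""
            n = np.asarray(n0, dtype=float)
            a = reaction_matrix * flux
            for _ in range(steps):
                n = n + dt * (a @ n)
            return n

        # Hypothetical chain: parent captures into daughter, daughter also absorbs.
        A = np.array([[-1.0e-24, 0.0],
                      [ 1.0e-24, -2.0e-25]])   # cm^2, per-unit-flux rates
        print(deplete([1.0e22, 0.0], A, flux=1.0e14, dt=3600.0, steps=24))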

  20. LSP 156, Low Power Embedded Analytics: FY15 Line Supported Information, Computation, and Exploitation Program

    DTIC Science & Technology

    2015-12-04

from back-office big-data analytics to fieldable hot-spot systems providing storage-processing-communication services for off-grid sensors. Speed...and power efficiency are the key metrics. Current state-of-the-art approaches for big-data aim toward scaling out to many computers to meet...pursued within Lincoln Laboratory as well as external sponsors. Our vision is to bring new capabilities in big-data and internet-of-things applications

  1. Development of a Low Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

The free-piston Stirling power convertor is a promising technology for high efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations, and compares the predictions with experimental data for one of the configurations that has been built and is currently being tested.

  2. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    NASA Astrophysics Data System (ADS)

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-01

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  3. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    DOE PAGES

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; ...

    2015-12-29

The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  4. Direct Methanol Fuel Cell Power Supply For All-Day True Wireless Mobile Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Wells

PolyFuel has developed state-of-the-art portable fuel cell technology for the portable computing market. A novel approach to passive water recycling within the MEA has led to significant system simplification and size reduction. Miniature stack technology with very high area utilization and minimalist seals has been developed. A highly integrated balance of plant with very low parasitic losses has been constructed around the new stack design. Demonstration prototype systems integrated with laptop computers have been shown in recent months to leading OEM computer manufacturers. PolyFuel intends to provide this technology to its customers as a reference design as a means of accelerating the commercialization of portable fuel cell technology. The primary goal of the project was to match the energy density of a commercial lithium ion battery for laptop computers. PolyFuel made large strides against this goal and has now demonstrated 270 Wh/liter compared with lithium ion energy densities of 300 Wh/liter. Further, more incremental improvements in energy density are envisioned, with an additional 20-30% gain possible in each of the next two years given further research and development.

  5. Multiphysics Computational Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2007-01-01

The objective of this effort is to develop an efficient and accurate computational heat transfer methodology to predict thermal, fluid, and hydrogen environments for a hypothetical solid-core, nuclear thermal engine - the Small Engine. In addition, the effects of power profile and hydrogen conversion on heat transfer efficiency and thrust performance were also investigated. The computational methodology is based on an unstructured-grid, pressure-based, all speeds, chemically reacting, computational fluid dynamics platform, while formulations of conjugate heat transfer were implemented to describe the heat transfer from solid to hydrogen inside the solid-core reactor. The computational domain covers the entire thrust chamber so that the aforementioned heat transfer effects impact the thrust performance directly. The result shows that the computed core-exit gas temperature, specific impulse, and core pressure drop agree well with those of design data for the Small Engine. Finite-rate chemistry is very important in predicting the proper energy balance as naturally occurring hydrogen decomposition is endothermic. Locally strong hydrogen conversion associated with a centralized power profile gives poor heat transfer efficiency and lower thrust performance. On the other hand, uniform hydrogen conversion associated with a more uniform radial power profile achieves higher heat transfer efficiency and higher thrust performance.

  6. Program manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 1 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.

  7. The effectiveness of power-generating complexes constructed on the basis of nuclear power plants combined with additional sources of energy determined taking risk factors into account

    NASA Astrophysics Data System (ADS)

    Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.

    2015-02-01

The effectiveness of combining nuclear power plants equipped with water-cooled, water-moderated power-generating (VVER) reactors with other sources of energy within unified power-generating complexes is analyzed. The use of such power-generating complexes makes it possible to achieve the necessary load pickup capability and flexibility in performing the mandatory selective primary and emergency control of load, as well as participation in passing the night minimums of electric load curves, while retaining high values of the capacity utilization factor of the entire power-generating complex at higher levels of steam-turbine efficiency. Versions involving the combined use of nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. Because hydrogen is an unsafe energy carrier whose use introduces additional elements of risk, a procedure is proposed for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants. A risk accounting technique based on statistical data is considered, including the characteristics of hydrogen and gas pipelines and the occurrence rate of tightness loss in process pipeline equipment. The expected frequencies of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. In estimating the damage inflicted by such events (fires and explosions) in nuclear power plant turbine buildings, US statistical data were used. Conservative scenarios of fires and explosions of hydrogen-air mixtures in nuclear power plant turbine buildings are presented. Results are given from calculations of the ratio of introduced annual risk to attained net annual profit for commensurable versions. This ratio can be used in selecting projects characterized by the most technically attainable and socially acceptable safety.

  8. Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites

    NASA Astrophysics Data System (ADS)

    2002-03-01

Many satellites are an integrated collection of sensors and actuators that require dedicated real-time control. For single processor systems, additional sensors require an increase in computing power and speed to provide the multi-tasking capability needed to service each sensor. Faster processors cost more and consume more power, which taxes a satellite's power resources and may lead to shorter satellite lifetimes. An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator. The design of Microdot, a four-bit microcontroller for distributed low-end computing, is presented. The design is based on previous research completed at the Space Electronics Branch, Air Force Research Laboratory (AFRL/VSSE) at Kirtland AFB, NM, and the Air Force Institute of Technology at Wright-Patterson AFB, OH. The Microdot has 29 instructions and a 1K x 4 instruction memory. The distributed computing architecture is based on the Philips Semiconductor I2C Serial Bus Protocol. A prototype was implemented and tested using an Altera Field Programmable Gate Array (FPGA). The prototype was operable up to 9.1 MHz. The design was targeted for fabrication in a radiation-hardened-by-design gate-array cell library for the TSMC 0.35 micrometer CMOS process.

  9. Carbon nanotube computer.

    PubMed

    Shulaker, Max M; Hills, Gage; Patil, Nishant; Wei, Hai; Chen, Hong-Yu; Wong, H-S Philip; Mitra, Subhasish

    2013-09-26

    The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. Owing to substantial fundamental imperfections inherent in CNTs, however, only very basic circuit blocks have been demonstrated. Here we show how these imperfections can be overcome, and demonstrate the first computer built entirely using CNT-based transistors. The CNT computer runs an operating system that is capable of multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we implement 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This experimental demonstration is the most complex carbon-based electronic system yet realized. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next generation of highly energy-efficient electronic systems.

  10. Computational methods in drug discovery

    PubMed Central

    Leelananda, Sumudu P

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341

  11. Computational methods in drug discovery.

    PubMed

    Leelananda, Sumudu P; Lindert, Steffen

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  12. Application of Blind Quantum Computation to Two-Party Quantum Computation

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-06-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.

  13. Application of Blind Quantum Computation to Two-Party Quantum Computation

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-03-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.
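
    The "prepare and encrypt his input qubits" step can be illustrated with the standard quantum one-time pad, which applies X^a Z^b with secret key bits (a, b); this is a textbook ingredient of such protocols, sketched here in numpy, not the paper's full two-party construction.

        import numpy as np

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def pauli_encrypt(psi, a, b):
            """Quantum one-time pad: without (a, b) the qubit looks maximally mixed."""
            return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi

        def pauli_decrypt(psi, a, b):
            """Inverse order; Pauli operators are self-inverse."""
            return np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ psi

        rng = np.random.default_rng()
        a, b = (int(v) for v in rng.integers(0, 2, size=2))   # secret key bits
        plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
        assert np.allclose(pauli_decrypt(pauli_encrypt(plus, a, b), a, b), plus)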

  14. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
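
    The iterative temperature/efficiency calculation can be sketched as a fixed-point loop: efficiency derates with temperature, and the temperature rise depends on the heat (absorbed power minus converted power). All coefficients below are illustrative placeholders rather than the paper's cell model.

        def cell_operating_point(intensity_w_m2, t_ambient=300.0, eta_ref=0.15,
                                 beta=5e-4, t_ref=298.0, k_thermal=0.03):
            """Iterate until cell temperature and efficiency are mutually
            consistent (hypothetical linear derating and thermal model)."""
            t_cell = t_ambient
            eta = eta_ref
            for _ in range(100):
                eta = eta_ref * (1.0 - beta * (t_cell - t_ref))
                t_new = t_ambient + k_thermal * intensity_w_m2 * (1.0 - eta)
                if abs(t_new - t_cell) < 1e-6:
                    break
                t_cell = t_new
            return t_cell, eta, eta * intensity_w_m2   # K, efficiency, W/m^2 out

        print(cell_operating_point(3000.0))   # roughly 3-suns concentration

    Cell outputs computed this way would then be combined per the single diode model to predict circuit output.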

  15. Contextuality supplies the 'magic' for quantum computation.

    PubMed

    Howard, Mark; Wallman, Joel; Veitch, Victor; Emerson, Joseph

    2014-06-19

    Quantum computers promise dramatic advantages over their classical counterparts, but the source of the power in quantum computing has remained elusive. Here we prove a remarkable equivalence between the onset of contextuality and the possibility of universal quantum computation via 'magic state' distillation, which is the leading model for experimentally realizing a fault-tolerant quantum computer. This is a conceptually satisfying link, because contextuality, which precludes a simple 'hidden variable' model of quantum mechanics, provides one of the fundamental characterizations of uniquely quantum phenomena. Furthermore, this connection suggests a unifying paradigm for the resources of quantum information: the non-locality of quantum theory is a particular kind of contextuality, and non-locality is already known to be a critical resource for achieving advantages with quantum communication. In addition to clarifying these fundamental issues, this work advances the resource framework for quantum computation, which has a number of practical applications, such as characterizing the efficiency and trade-offs between distinct theoretical and experimental schemes for achieving robust quantum computation, and putting bounds on the overhead cost for the classical simulation of quantum algorithms.

  16. Thermodynamics of quasideterministic digital computers

    NASA Astrophysics Data System (ADS)

    Chu, Dominique

    2018-02-01

A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1, and additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls as a power of the system size, whereas the cost only increases linearly with the system size.
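
    Read as scaling laws, the stated trade-off can be written as follows (the symbols and constants are our hedged formalization, not the paper's notation):

        % N: gate system size; P(N): error probability; C(N): entropy-production cost
        \begin{align}
          P(N) &\sim c_1\, N^{-\alpha}, \qquad \alpha > 0, \\
          C(N) &\sim c_2\, N .
        \end{align}
        % Hitting a target error \epsilon needs N \sim (c_1/\epsilon)^{1/\alpha},
        % so the cost grows only polynomially in 1/\epsilon: near-deterministic
        % accuracy at finite, modest cost.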

  17. MEMS-Based Power Generation Techniques for Implantable Biosensing Applications

    PubMed Central

    Lueke, Jonathan; Moussa, Walied A.

    2011-01-01

Implantable biosensing is attractive for both medical monitoring and diagnostic applications. It is possible to monitor phenomena such as physical loads on joints or implants, vital signs, or osseointegration in vivo and in real time. Microelectromechanical (MEMS)-based generation techniques can allow for the autonomous operation of implantable biosensors by generating electrical power to replace or supplement existing battery-based power systems. By supplementing existing battery-based power systems for implantable biosensors, the operational lifetime of the sensor is increased. In addition, the potential for a greater amount of available power allows additional components to be added to the biosensing module, such as computational and wireless components, improving functionality and performance of the biosensor. Photovoltaic, thermovoltaic, micro fuel cell, electrostatic, electromagnetic, and piezoelectric based generation schemes are evaluated in this paper for applicability for implantable biosensing. MEMS-based generation techniques that harvest ambient energy, such as vibration, are much better suited for implantable biosensing applications than fuel-based approaches, producing up to milliwatts of electrical power. High power density MEMS-based approaches, such as piezoelectric and electromagnetic schemes, allow for supplemental and replacement power schemes for biosensing applications to improve device capabilities and performance. In addition, this may allow for the biosensor to be further miniaturized, reducing the need for relatively large batteries with respect to device size. This would cause the implanted biosensor to be less invasive, increasing the quality of care received by the patient. PMID:22319362

  18. MEMS-based power generation techniques for implantable biosensing applications.

    PubMed

    Lueke, Jonathan; Moussa, Walied A

    2011-01-01

Implantable biosensing is attractive for both medical monitoring and diagnostic applications. It is possible to monitor phenomena such as physical loads on joints or implants, vital signs, or osseointegration in vivo and in real time. Microelectromechanical (MEMS)-based generation techniques can allow for the autonomous operation of implantable biosensors by generating electrical power to replace or supplement existing battery-based power systems. By supplementing existing battery-based power systems for implantable biosensors, the operational lifetime of the sensor is increased. In addition, the potential for a greater amount of available power allows additional components to be added to the biosensing module, such as computational and wireless components, improving functionality and performance of the biosensor. Photovoltaic, thermovoltaic, micro fuel cell, electrostatic, electromagnetic, and piezoelectric based generation schemes are evaluated in this paper for applicability for implantable biosensing. MEMS-based generation techniques that harvest ambient energy, such as vibration, are much better suited for implantable biosensing applications than fuel-based approaches, producing up to milliwatts of electrical power. High power density MEMS-based approaches, such as piezoelectric and electromagnetic schemes, allow for supplemental and replacement power schemes for biosensing applications to improve device capabilities and performance. In addition, this may allow for the biosensor to be further miniaturized, reducing the need for relatively large batteries with respect to device size. This would cause the implanted biosensor to be less invasive, increasing the quality of care received by the patient.

  19. Affordable Emerging Computer Hardware for Neuromorphic Computing Applications

    DTIC Science & Technology

    2011-09-01

AFFORDABLE EMERGING COMPUTER HARDWARE FOR NEUROMORPHIC COMPUTING APPLICATIONS ...speedup over software [3, 4]. Table 1 shows a comparison of the computing performance, communication performance, power consumption...time is probably 5 frames per second, corresponding to 5 saccades. III. RESULTS AND DISCUSSION The use of IBM Cell-BE technology (Sony PlayStation

  20. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

Human-Computer Interaction (HCI) raises demands of convenience, endurance, responsiveness, and naturalness. This paper describes the design of a compact, wearable, low-power HCI device applied to gesture recognition. The system combines multi-mode sensing signals, a vision signal and a motion signal, and the equipment is fitted with a depth camera and a motion sensor. The dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a layered module framework, which enables real-time collection (60 fps), processing, and transmission through synchronized fusion of asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system makes use of low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the algorithm with the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. As the results show, overall energy consumption can be as low as 0.5 W.

  1. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru

    2010-12-15

The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  2. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  3. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations generally do not have the computing power to tackle complex scientific applications on their own, making them primarily useful for visualization, data reduction, and filtering. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  4. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional information. 33.10 Section 33.10 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee...

  5. Power combining in an array of microwave power rectifiers

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.; Borrego, J. M.

    1979-01-01

    This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.

  6. Rotating Detonation Combustion: A Computational Study for Stationary Power Generation

    NASA Astrophysics Data System (ADS)

    Escobar, Sergio

The increased availability of gaseous fossil fuels in the US has led to the substantial growth of stationary Gas Turbine (GT) usage for electrical power generation. In fact, from 2013 to 2014, of the 11 terawatt-hours per day produced from fossil fuels, approximately 27% was generated through the combustion of natural gas in stationary GT. The thermodynamic efficiency for simple-cycle GT has increased from 20% to 40% during the last six decades, mainly due to research and development in the fields of combustion science, material science and machine design. However, additional improvements have become more costly and more difficult to obtain as technology is further refined. An alternative to improve GT thermal efficiency is the implementation of a combustion regime leading to pressure gain, rather than pressure loss, across the combustor. One concept being considered for this purpose is Rotating Detonation Combustion (RDC). RDC refers to a combustion regime in which a detonation wave propagates continuously in the azimuthal direction of a cylindrical annular chamber. In RDC, the fuel and oxidizer, injected from separated streams, are mixed near the injection plane and are then consumed by the detonation front traveling inside the annular gap of the combustion chamber. The detonation products then expand in the azimuthal and axial direction away from the detonation front and exit through the combustion chamber outlet. In the present study, Computational Fluid Dynamics (CFD) is used to predict the performance of Rotating Detonation Combustion (RDC) at operating conditions relevant to GT applications. As part of this study, a modeling strategy for RDC simulations was developed. The validation of the model was performed using benchmark cases with different levels of complexity. First, 2D simulations of non-reactive shock tubes and detonation tubes were performed. The numerical predictions that were obtained using different modeling parameters were compared with

  7. Is thinking computable?

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

Strong artificial intelligence claims that conscious thought can arise in computers containing the right algorithms even though none of the programs or components of those computers understands what is going on. As proof, it asserts that brains are finite webs of neurons, each with a definite function governed by the laws of physics; this web has a set of equations that can be solved (or simulated) by a sufficiently powerful computer. Strong AI claims the Turing test as a criterion of success. A recent debate in Scientific American concludes that the Turing test is not sufficient, but leaves intact the underlying premise that thought is a computable process. The recent book by Roger Penrose, however, offers a sharp challenge, arguing that the laws of quantum physics may govern mental processes and that these laws may not be computable. In every area of mathematics and physics, Penrose finds evidence of nonalgorithmic human activity and concludes that mental processes are inherently more powerful than computational processes.

  8. Markov chain algorithms: a template for building future robust low-power systems

    PubMed Central

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs) as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
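
    As a concrete flavor of the MC framing, a random-walk SAT solver keeps working when some transitions are corrupted, because every state retains a path to a satisfying assignment. Below is a toy Python sketch with injected transition errors; the error model and parameters are ours for illustration, not the paper's fault-injection methodology.

        import random

        def noisy_walksat(clauses, n_vars, p_noise=0.05, max_flips=100000, seed=1):
            """Random-walk SAT as a Markov chain; with probability p_noise the
            flip lands on a random variable, emulating a transition error."""
            rng = random.Random(seed)
            assign = [rng.random() < 0.5 for _ in range(n_vars)]
            sat = lambda lit: assign[abs(lit) - 1] == (lit > 0)
            for _ in range(max_flips):
                unsat = [c for c in clauses if not any(sat(l) for l in c)]
                if not unsat:
                    return assign
                if rng.random() < p_noise:                    # faulty transition
                    v = rng.randrange(n_vars)
                else:                                         # intended transition
                    v = abs(rng.choice(rng.choice(unsat))) - 1
                assign[v] = not assign[v]
            return None

        # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
        print(noisy_walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))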

  9. Computer usage and national energy consumption: Results from a field-metering study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery

The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflective of laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated
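
    The unit-AEC arithmetic scales usage hours by mode power draws and multiplies by the installed base. The sketch below reproduces the shape of that calculation using the study's mean desktop usage but hypothetical power draws and stock; it is not the report's actual model.

        def annual_energy_kwh(hours_per_day, p_on_w, p_idle_w):
            """Unit AEC: On-mode hours at active draw, remaining hours at idle."""
            on = hours_per_day * 365 * p_on_w
            idle = (24 - hours_per_day) * 365 * p_idle_w
            return (on + idle) / 1000.0

        aec = annual_energy_kwh(7.3, 65.0, 3.0)   # 7.3 h/d mean; 65 W / 3 W assumed
        stock = 103e6                             # hypothetical desktop stock
        print(f"unit AEC ~ {aec:.0f} kWh/yr, "
              f"national ~ {aec * stock / 1e9:.1f} TWh/yr")

    With these placeholder draws the arithmetic lands near the reported 194 kWh/yr and 20 TWh figures, which shows how sensitive the national estimate is to the assumed power draws and stock.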

  10. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  11. An improved computer program for calculating the theoretical performance parameters of a propeller type wind turbine. An appendix to the final report on feasibility of using wind power to pump irrigation water (Texas). [PROP Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barieau, R.E.

    1977-03-01

The PROP Program of Wilson and Lissaman has been modified by adding the Newton-Raphson method and a stepwise search method as optional solution methods. In addition, an optimization method is included: twist angles, tip speed ratio, and pitch angle may be varied to maximize the power coefficient. The computer program listing is presented along with sample input and output data. Further improvements to the program are discussed.
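
    The added Newton-Raphson option amounts to driving dCp/dlambda to zero. Below is a generic Python sketch using finite-difference derivatives and a hypothetical smooth Cp(lambda) curve; the real PROP code evaluates Cp from blade-element/momentum theory, not this stand-in.

        import math

        def newton_max(f, x0, h=1e-5, tol=1e-10, max_iter=50):
            """Newton-Raphson on f'(x) = 0 with finite-difference derivatives."""
            x = x0
            for _ in range(max_iter):
                d1 = (f(x + h) - f(x - h)) / (2 * h)
                d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
                x_new = x - d1 / d2
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        # Hypothetical Cp(lambda) with one interior maximum (peak 0.5 at lambda = 7).
        cp = lambda lam: 0.5 * math.e * (lam / 7.0) * math.exp(-lam / 7.0)
        lam_opt = newton_max(cp, x0=5.0)
        print(f"optimum tip-speed ratio ~ {lam_opt:.2f}, Cp ~ {cp(lam_opt):.3f}")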

  12. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    PubMed

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design and the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective for optimizing team process, but its impact on the behavioural outcomes may be moderated by more factors, such as task duration. The study proposed and evaluated a team-level integrated computer-based procedure, which presents team members' operation status and execution history to one another. The experimental results show that compared with the traditional procedure design, the integrated design reduces the effort of team communication and improves team transparency.

  13. The Glass Computer

    ERIC Educational Resources Information Center

    Paesler, M. A.

    2009-01-01

    Digital computers use different kinds of memory, each of which is either volatile or nonvolatile. On most computers only the hard drive memory is nonvolatile, i.e., it retains all information stored on it when the power is off. When a computer is turned on, an operating system stored on the hard drive is loaded into the computer's memory cache and…

  14. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed-range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion-Rotor 1, Stator 2, and Rotor 2-of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  15. Automation in the Space Station module power management and distribution Breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Lollar, Louis F.

    1990-01-01

    The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.
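
    To make the LLP behavior concrete, here is a toy sketch of priority-based shedding of loads that draw more than their scheduled power; all names, priorities, and wattages are invented, and the real breadboard logic is considerably richer.

        # Shed the least-critical overconsuming loads first until under budget.
        loads = [
            {"name": "life_support", "priority": 0, "scheduled_w": 500, "actual_w": 480},
            {"name": "experiment_a", "priority": 2, "scheduled_w": 300, "actual_w": 410},
            {"name": "lighting",     "priority": 1, "scheduled_w": 200, "actual_w": 190},
        ]
        budget_w = 1000
        total = sum(l["actual_w"] for l in loads)
        for l in sorted(loads, key=lambda l: -l["priority"]):  # lower number = more critical
            if total <= budget_w:
                break
            if l["actual_w"] > l["scheduled_w"]:
                total -= l["actual_w"]
                print("shedding", l["name"])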

  16. A study on the applications of AI in finishing of additive manufacturing parts

    NASA Astrophysics Data System (ADS)

    Fathima Patham, K.

    2017-06-01

    Artificial intelligence and computer simulation are powerful technological tools for solving complex problems in the manufacturing industries. Additive Manufacturing is one of the powerful manufacturing techniques that provide design flexibility to products. Products with complex shapes are manufactured directly, without any machining or tooling, using Additive Manufacturing. However, the main drawback of components produced using the Additive Manufacturing processes is the quality of their surfaces. This study aims to minimize the defects caused during Additive Manufacturing with the aid of Artificial Intelligence. The developed AI system has three layers; each layer tries to eliminate or minimize production errors. The first layer of the AI system optimizes the digitization of the 3D CAD model of the product and hence reduces staircase errors. The second layer of the AI system optimizes the 3D printing machine parameters in order to eliminate the warping effect. The third layer of the AI system helps to choose the surface finishing technique suitable for the printed component based on the Degree of Complexity of the product and the material. The efficiency of the developed AI system was examined on functional parts such as gears.

  17. Computing technology in the 1980's. [computers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  18. Medical imaging and registration in computer assisted surgery.

    PubMed

    Simon, D A; Lavallée, S

    1998-09-01

    Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts in computer assisted surgery applied to orthopaedics are outlined, with a focus on the basic framework and underlying technologies. In addition, technical challenges and future trends in the field are discussed.

  19. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables, and for developing analyses that require a scalar equivalent representation of refractive power.
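
    To illustrate "the average of a function" computation numerically, here is a minimal sketch with a hypothetical power profile; the paper's actual derived formulae are not reproduced.

        import numpy as np
        theta = np.linspace(0.0, 0.5, 1001)        # angles of incidence (rad)
        P = 3.0 + 0.8 * np.sin(theta) ** 2          # hypothetical power profile (diopters)
        # Average of a function: (1/(b-a)) * integral of P(theta) over [a, b]
        P_avg = np.trapz(P, theta) / (theta[-1] - theta[0])
        print(P_avg)                                # average power in diopters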

  20. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  1. Impact of Classroom Computer Use on Computer Anxiety.

    ERIC Educational Resources Information Center

    Lambert, Matthew E.; And Others

    Increasing use of computer programs for undergraduate psychology education has raised concern over the impact of computer anxiety on educational performance. Additionally, some researchers have indicated that classroom computer use can exacerbate pre-existing computer anxiety. To evaluate the relationship between in-class computer use and computer…

  2. Computationally Driven Two-Dimensional Materials Design: What Is Next?

    DOE PAGES

    Pan, Jie; Lany, Stephan; Qi, Yue

    2017-07-17

    Two-dimensional (2D) materials offer many key advantages to innovative applications, such as spintronics and quantum information processing. Theoretical computations have accelerated 2D materials design. In this issue of ACS Nano, Kumar et al. report that ferromagnetism can be achieved in functionalized nitride MXene based on first-principles calculations. Their computational results shed light on a potentially vast group of materials for the realization of 2D magnets. In this Perspective, we briefly summarize the promising properties of 2D materials and the role theory has played in predicting these properties. Additionally, we discuss challenges and opportunities to boost the power of computation for the prediction of the 'structure-property-process (synthesizability)' relationship of 2D materials.

  3. Computationally guided discovery of thermoelectric materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorai, Prashun; Stevanović, Vladan; Toberer, Eric S.

    The potential for advances in thermoelectric materials, and thus solid-state refrigeration and power generation, is immense. Progress so far has been limited by both the breadth and diversity of the chemical space and the serial nature of experimental work. In this Review, we discuss how recent computational advances are revolutionizing our ability to predict electron and phonon transport and scattering, as well as materials dopability, and we examine efficient approaches to calculating critical transport properties across large chemical spaces. When coupled with experimental feedback, these high-throughput approaches can stimulate the discovery of new classes of thermoelectric materials. Within smaller materials subsets, computations can guide the optimal chemical and structural tailoring to enhance materials performance and provide insight into the underlying transport physics. Beyond perfect materials, computations can be used for the rational design of structural and chemical modifications (such as defects, interfaces, dopants and alloys) to provide additional control on transport properties to optimize performance. Through computational predictions for both materials searches and design, a new paradigm in thermoelectric materials discovery is emerging.

  4. Computationally guided discovery of thermoelectric materials

    DOE PAGES

    Gorai, Prashun; Stevanović, Vladan; Toberer, Eric S.

    2017-08-22

    The potential for advances in thermoelectric materials, and thus solid-state refrigeration and power generation, is immense. Progress so far has been limited by both the breadth and diversity of the chemical space and the serial nature of experimental work. In this Review, we discuss how recent computational advances are revolutionizing our ability to predict electron and phonon transport and scattering, as well as materials dopability, and we examine efficient approaches to calculating critical transport properties across large chemical spaces. When coupled with experimental feedback, these high-throughput approaches can stimulate the discovery of new classes of thermoelectric materials. Within smaller materials subsets, computations can guide the optimal chemical and structural tailoring to enhance materials performance and provide insight into the underlying transport physics. Beyond perfect materials, computations can be used for the rational design of structural and chemical modifications (such as defects, interfaces, dopants and alloys) to provide additional control on transport properties to optimize performance. Through computational predictions for both materials searches and design, a new paradigm in thermoelectric materials discovery is emerging.

  5. Computational modeling of colorimetric primary transducer for metrological assurance in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Skliarov, Volodymyr

    2018-03-01

    Many additive manufacturing (AM) systems are based on laser technology. The advantage of laser technology is that it provides a high-intensity and high-collimation energy beam that can be controlled. Since AM requires that the material on each layer be solid or connected to the previous one, the energy of laser radiation is exactly the needed technical tool for the processing of the material. AM uses two types of laser processing: cutting and heating. One of the most popular (common) types of measurements in the field of laser metrology is the control of the energy parameters of the sources of laser radiation. At present, calorimeters provide the highest accuracy of absolute measurements of laser radiation in the power range from several watts to tens of kilowatts. The main elements that determine the accuracy of reproduction, maintenance and transfer of the unit of laser power are the primary measuring converters (PMCs), which are part of the equipment of the national primary measurement standards of Ukraine. A significant contribution to the uncertainty budget of the primary measuring calorimetric converter is the unbalanced replacement of laser radiation by the heat flux that calibrates this converter. The heterogeneous internal structure of the calorimetric primary converter, the nonlinearity of processes occurring in it, and the multifactorial process of its calibration substantially complicate the development of primary measuring converters. The purpose of this paper is to simulate the thermal field of the primary converter for maximum reduction of the uncertainty of calibration. The presented research is part of the scientific work that the NSC "Institute of Metrology" carries out under the COOMET and EMPIRE projects. The modeling was performed in the academic version of ANSYS.

  6. Computation Directorate 2008 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

  7. Thermoelectric Power Generation from Lanthanum Strontium Titanium Oxide at Room Temperature through the Addition of Graphene.

    PubMed

    Lin, Yue; Norman, Colin; Srivastava, Deepanshu; Azough, Feridoon; Wang, Li; Robbins, Mark; Simpson, Kevin; Freer, Robert; Kinloch, Ian A

    2015-07-29

    The applications of strontium titanium oxide based thermoelectric materials are currently limited by their high operating temperatures of >700 °C. Herein, we show that the thermal operating window of lanthanum strontium titanium oxide (LSTO) can be reduced to room temperature by the addition of a small amount of graphene. This increase in operating performance will enable future applications such as generators in vehicles and other sectors. The LSTO composites incorporated one percent or less of graphene and were sintered under an argon/hydrogen atmosphere. The resultant materials were reduced and possessed a multiphase structure with nanosized grains. The thermal conductivity of the nanocomposites decreased upon the addition of graphene, whereas the electrical conductivity and power factor both increased significantly. These factors, together with a moderate Seebeck coefficient, meant that a high power factor of ∼2500 μWm(-1)K(-2) was reached at room temperature at a loading of 0.6 wt % graphene. The highest thermoelectric figure of merit (ZT) was achieved when 0.6 wt % graphene was added (ZT = 0.42 at room temperature and 0.36 at 750 °C), with >280% enhancement compared to that of pure LSTO. A preliminary 7-couple device was produced using bismuth strontium cobalt oxide/graphene-LSTO pucks. This device had a Seebeck coefficient of ∼1500 μV/K and an open voltage of 600 mV at a mean temperature of 219 °C.
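
    For orientation (standard thermoelectric background, not specific to this paper): the power factor and figure of merit quoted above are related through the usual definitions, with Seebeck coefficient S, electrical conductivity σ, absolute temperature T and thermal conductivity κ:

        \[ \mathrm{power\ factor} = S^{2}\sigma, \qquad ZT = \frac{S^{2}\sigma T}{\kappa} \]

    So the reported drop in κ and rise in S²σ upon graphene addition both push ZT upward.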

  8. Investigation of mass transfer intensification under power ultrasound irradiation using 3D computational simulation: A comparative analysis.

    PubMed

    Sajjadi, Baharak; Asgharzadehahmadi, Seyedali; Asaithambi, Perumal; Raman, Abdul Aziz Abdul; Parthasarathy, Rajarathinam

    2017-01-01

    This paper aims at investigating the influence of acoustic streaming induced by low-frequency (24 kHz) ultrasound irradiation on mass transfer in a two-phase system. The main objective is to discuss the possible mass transfer improvements under ultrasound irradiation. Three analyses were conducted: i) experimental analysis of mass transfer under ultrasound irradiation; ii) comparative analysis between the results of the ultrasound-assisted mass transfer and those obtained from mechanical stirring; and iii) computational analysis of the systems using 3D CFD simulation. In the experimental part, the interactive effects of liquid rheological properties, ultrasound power and superficial gas velocity on mass transfer were investigated in two different sonicators. The results were then compared with those of mechanical stirring. In the computational part, the results were illustrated as a function of acoustic streaming behaviour, fluid flow pattern, gas/liquid volume fraction and turbulence in the two-phase system, and finally the mass transfer coefficient was specified. It was found that the additional turbulence created by ultrasound played the most important role in intensifying the mass transfer phenomena compared to that in the stirred vessel. Furthermore, long residence time, which depends on geometrical parameters, is another key for mass transfer. The results obtained in the present study would help researchers understand the role of ultrasound as an energy source, and acoustic streaming as one of the most important effects of ultrasound waves, for intensifying gas-liquid mass transfer in a two-phase system, and can be a breakthrough in the design procedure as no similar studies were found in the existing literature. Copyright © 2016. Published by Elsevier B.V.

  9. Global computing for bioinformatics.

    PubMed

    Loewe, Laurence

    2002-12-01

    Global computing, the collaboration of idle PCs via the Internet in a SETI@home style, emerges as a new way of massive parallel multiprocessing with potentially enormous CPU power. Its relations to the broader, fast-moving field of Grid computing are discussed without attempting a review of the latter. This review (i) includes a short table of milestones in global computing history, (ii) lists opportunities global computing offers for bioinformatics, (iii) describes the structure of problems well suited for such an approach, (iv) analyses the anatomy of successful projects and (v) points to existing software frameworks. Finally, an evaluation of the various costs shows that global computing indeed has merit, if the problem to be solved is already coded appropriately and a suitable global computing framework can be found. Then, either significant amounts of computing power can be recruited from the general public, or--if employed in an enterprise-wide Intranet for security reasons--idle desktop PCs can substitute for an expensive dedicated cluster.

  10. Power Profiles and In Vitro Optical Quality of Scleral Contact Lenses: Effect of the Aperture and Power.

    PubMed

    Domínguez-Vicent, Alberto; Esteve-Taboada, Jose Juan; Recchioni, Alberto; Brautaset, Rune

    2018-05-01

    To assess the power profile and in vitro optical quality of scleral contact lenses with different powers as a function of the optical aperture. The mini and semiscleral contact lenses (Procornea) were measured for five powers per design. The NIMO TR-1504 (Lambda-X) was used to assess the power profile and Zernike coefficients of each contact lens. Ten measurements per lens were taken at 3- and 6-mm apertures. Furthermore, the optical quality of each lens was described in Zernike coefficients, modulation transfer function, and point spread function (PSF). A convolution of each lens PSF with an eye-chart image was also computed. The optical power fluctuated less than 0.5 diopters (D) along the optical zone of each lens. However, the optical power obtained for some lenses did not match the corresponding nominal power, the maximum difference being 0.5 D. In terms of optical quality, small differences were obtained among all lenses within the same design. Although significant differences were obtained among lenses (P<0.05), these had only a small impact on the image quality of each convolution. Insignificant power fluctuations were obtained along the optical zone measured for each scleral lens. Additionally, the optical quality of both lens designs was shown to be independent of the lens power within the same aperture.

  11. The Computer Revolution.

    ERIC Educational Resources Information Center

    Berkeley, Edmund C.

    "The Computer Revolution", a part of the "Second Industrial Revolution", is examined with reference to the social consequences of computers. The subject is introduced in an opening section which discusses the revolution in the handling of information and the history, powers, uses, and working s of computers. A second section examines in detail the…

  12. Power and Energy Considerations at Forward Operating Bases (FOBs)

    DTIC Science & Technology

    2010-06-16

    Anticipated additional plug loads by users include personal computers and gaming devices, coffee pots, refrigerators, lights, and personal heaters. An effort was made to account for the significant amount of equipment that consumes power but is not on the unit's MTOE (printers, plotters, coffee pots, etc.). The system described supports 50 warfighters, including billeting, kitchen, laundry, shower, latrines, and a new wastewater treatment system. Capability/impact: compact, lightweight.

  13. Circuit for Communication Over Power Lines

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Normal F.; Greer, Lawrence C., III; Nappier, Jennifer

    2011-01-01

    Many distributed systems share common sensors and instruments along with a common power line supplying current to the system. A communication technique and circuit has been developed that allows for the simple inclusion of an instrument, sensor, or actuator node within any system containing a common power bus. Wherever power is available, a node can be added, which can then draw power for itself, its associated sensors, and actuators from the power bus all while communicating with other nodes on the power bus. The technique modulates a DC power bus through capacitive coupling using on-off keying (OOK), and receives and demodulates the signal from the DC power bus through the same capacitive coupling. The circuit acts as serial modem for the physical power line communication. The circuit and technique can be made of commercially available components or included in an application specific integrated circuit (ASIC) design, which allows for the circuit to be included in current designs with additional circuitry or embedded into new designs. This device and technique moves computational, sensing, and actuation abilities closer to the source, and allows for the networking of multiple similar nodes to each other and to a central processor. This technique also allows for reconfigurable systems by adding or removing nodes at any time. It can do so using nothing more than the in situ power wiring of the system.
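
    A minimal numerical sketch of on-off keying with non-coherent envelope detection, the modulation scheme named above, follows; the sample rate, carrier frequency, and bit rate are illustrative choices, not the circuit's actual parameters.

        import numpy as np
        fs, fc, bit_rate = 50_000.0, 5_000.0, 500.0    # illustrative rates
        bits = np.array([1, 0, 1, 1, 0])
        spb = int(fs / bit_rate)                        # samples per bit
        t = np.arange(bits.size * spb) / fs
        carrier = np.sin(2 * np.pi * fc * t)
        tx = np.repeat(bits, spb) * carrier             # OOK: gate the carrier on/off
        # Non-coherent receive: rectify, average per bit period, then threshold
        env = np.abs(tx).reshape(bits.size, spb).mean(axis=1)
        rx = (env > env.max() / 2).astype(int)
        assert (rx == bits).all()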

  14. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3

  15. QCDOC: A 10-teraflops scale computer for lattice QCD

    NASA Astrophysics Data System (ADS)

    Chen, D.; Christ, N. H.; Cristian, C.; Dong, Z.; Gara, A.; Garg, K.; Joo, B.; Kim, C.; Levkova, L.; Liao, X.; Mawhinney, R. D.; Ohta, S.; Wettig, T.

    2001-03-01

    The architecture of a new class of computers, optimized for lattice QCD calculations, is described. An individual node is based on a single integrated circuit containing a PowerPC 32-bit integer processor with a 1 Gflops 64-bit IEEE floating point unit, 4 Mbyte of memory, 8 Gbit/sec nearest-neighbor communications and additional control and diagnostic circuitry. The machine's name, QCDOC, derives from "QCD On a Chip".

  16. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
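
    A toy sketch of the queue-managed distribution of small work units follows; all names are invented, and the actual platform uses JavaScript clients with a relational database rather than in-process threads.

        import queue, threading
        # Stand-in for the server-side queue of work units; each unit here is
        # one small spatial tile of the model domain (illustrative only).
        work = queue.Queue()
        for tile_id in range(16):
            work.put(tile_id)
        results = {}
        def volunteer():
            while True:
                try:
                    tile = work.get_nowait()
                except queue.Empty:
                    return
                results[tile] = tile * 2      # placeholder for a real simulation step
        workers = [threading.Thread(target=volunteer) for _ in range(4)]
        for w in workers: w.start()
        for w in workers: w.join()
        print(len(results), "tiles computed")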

  17. Expression Templates for Truncated Power Series

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Shasharina, Svetlana G.

    1997-05-01

    Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
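
    The expression-template machinery itself is a C++ technique and does not translate to a short sketch, but the underlying truncated-series arithmetic does. Below is a minimal order-truncated add/multiply in Python, illustrating the objects the templates optimize: the product coefficient is (fg)_k = sum over i of f_i * g_(k-i), discarded above the truncation order.

        # Minimal truncated power series: coefficients c[0..N] represent
        # c0 + c1*x + ... + cN*x^N; products are truncated at order N.
        class TPS:
            def __init__(self, coeffs, order=5):
                self.order = order
                self.c = (list(coeffs) + [0.0] * (order + 1))[: order + 1]
            def __add__(self, other):
                return TPS([a + b for a, b in zip(self.c, other.c)], self.order)
            def __mul__(self, other):
                out = [0.0] * (self.order + 1)
                for i, a in enumerate(self.c):
                    for j, b in enumerate(other.c):
                        if i + j <= self.order:
                            out[i + j] += a * b
                return TPS(out, self.order)

        x, one = TPS([0.0, 1.0]), TPS([1.0])
        print(((one + x) * (one + x)).c)   # [1.0, 2.0, 1.0, 0.0, 0.0, 0.0]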

  18. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  19. Augmentation of the space station module power management and distribution breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Hall, David K.; Lollar, Louis F.

    1991-01-01

    The space station module power management and distribution (SSM/PMAD) breadboard models power distribution and management, including scheduling, load prioritization, and a fault detection, identification, and recovery (FDIR) system within a Space Station Freedom habitation or laboratory module. This 120 VDC system is capable of distributing up to 30 kW of power among more than 25 loads. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level consists of fast, simple (from a computing standpoint) switchgear that is capable of quickly safing the system. At the next level are local load center processors, (LLP's) which execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. Above the LLP's are three cooperating artificial intelligence (AI) systems which manage load prioritizations, load scheduling, load shedding, and fault recovery and management. Recent upgrades to hardware and modifications to software at both the LLP and AI system levels promise a drastic increase in speed, a significant increase in functionality and reliability, and potential for further examination of advanced automation techniques. The background, SSM/PMAD, interface to the Lewis Research Center test bed, the large autonomous spacecraft electrical power system, and future plans are discussed.

  20. The potential benefits of photonics in the computing platform

    NASA Astrophysics Data System (ADS)

    Bautista, Jerry

    2005-03-01

    The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry/Internet, and predictive models for financial markets are driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems, with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasing issue with a greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc., completely in the optical domain with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.

  1. COMMIX-PPC: A three-dimensional transient multicomponent computer program for analyzing performance of power plant condensers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chien, T.H.; Domanus, H.M.; Sha, W.T.

    1993-02-01

    The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.

  2. Plasmonic computing of spatial differentiation

    NASA Astrophysics Data System (ADS)

    Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui

    2017-05-01

    Optical analog computing offers high-throughput low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation. And we experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.
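
    The computation the plasmonic interface performs optically is ordinary first-order spatial differentiation; digitally, edge detection by differentiation amounts to the following (a numerical analogy only, not a model of the device).

        import numpy as np
        img = np.zeros((8, 8)); img[:, 4:] = 1.0   # image with one vertical step edge
        ddx = np.diff(img, axis=1)                  # discrete spatial derivative along x
        print(np.abs(ddx).sum(axis=0))              # nonzero only at the edge location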

  3. Quantum phases with differing computational power.

    PubMed

    Cui, Jian; Gu, Mile; Kwek, Leong Chuan; Santos, Marcelo França; Fan, Heng; Vedral, Vlatko

    2012-05-01

    The observation that concepts from quantum information have generated many alternative indicators of quantum phase transitions hints that quantum phase transitions possess operational significance with respect to the processing of quantum information. Yet, studies on whether such transitions lead to quantum phases that differ in their capacity to process information remain limited. Here we show that there exist quantum phase transitions that cause a distinct qualitative change in our ability to simulate certain quantum systems under perturbation of an external field by local operations and classical communication. In particular, we show that in certain quantum phases of the XY model, adiabatic perturbations of the external magnetic field can be simulated by local spin operations, whereas the resulting effect within other phases results in coherent non-local interactions. We discuss the potential implications for adiabatic quantum computation, where a computational advantage exists only when adiabatic perturbation results in coherent multi-body interactions.

  4. Investigating power capping toward energy-efficient scientific applications: Investigating Power Capping toward Energy-Efficient Scientific Applications

    DOE PAGES

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...

    2018-03-22

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
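
    As a loose illustration of measuring a system's power draw during such experiments: Linux exposes cumulative package energy counters through the powercap/RAPL sysfs interface on supported Intel systems. The path below is platform-dependent, reading it may require elevated privileges, and this is not the instrumentation used in the paper.

        import time
        RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # platform-dependent path
        def read_uj():
            with open(RAPL) as f:
                return int(f.read())
        e0, t0 = read_uj(), time.time()
        time.sleep(1.0)
        e1, t1 = read_uj(), time.time()
        # Average package power over the interval (ignores counter wraparound)
        print("%.1f W" % ((e1 - e0) / 1e6 / (t1 - t0)))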

  5. Investigating power capping toward energy-efficient scientific applications: Investigating Power Capping toward Energy-Efficient Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  6. Computing an operating parameter of a unified power flow controller

    DOEpatents

    Wilson, David G.; Robinett, III, Rush D.

    2017-12-26

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.

  7. Computing an operating parameter of a unified power flow controller

    DOEpatents

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.

  8. Fast Computation and Assessment Methods in Power System Analysis

    NASA Astrophysics Data System (ADS)

    Nagata, Masaki

    Power system analysis is essential for efficient and reliable power system operation and control. Recently, online security assessment systems have become important, as more efficient use of power networks is increasingly required. In this article, fast power system analysis techniques such as contingency screening, parallel processing and intelligent systems application are briefly surveyed from the viewpoint of their application to online dynamic security assessment.

  9. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    NASA Astrophysics Data System (ADS)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
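
    To make the map/reduce decomposition concrete, here is a minimal sketch in the same spirit: the "map" step computes partial images from chunks of projection views and the "reduce" step sums them. The uniform-smear operator and all sizes are placeholders, not the paper's fan-beam projector or its actual MapReduce implementation.

        from multiprocessing import Pool
        import numpy as np

        def partial_backprojection(chunk):
            # Placeholder "back projector": smear each 1-D projection uniformly
            # over a small grid (a stand-in for the real fan-beam operator).
            img = np.zeros((64, 64))
            for proj in chunk:
                img += proj.mean()
            return img

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            projections = rng.random((360, 64))      # 360 views, 64 detector bins
            chunks = np.array_split(projections, 8)  # "map" work units
            with Pool(4) as pool:
                partials = pool.map(partial_backprojection, chunks)
            image = sum(partials)                    # "reduce": sum partial images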

  10. Spacecraft computer technology at Southwest Research Institute

    NASA Technical Reports Server (NTRS)

    Shirley, D. J.

    1993-01-01

    Southwest Research Institute (SwRI) has developed and delivered spacecraft computers for a number of different near-Earth-orbit spacecraft including shuttle experiments and SDIO free-flyer experiments. We describe the evolution of the basic SwRI spacecraft computer design from those weighing in at 20 to 25 lb and using 20 to 30 W to newer models weighing less than 5 lb and using only about 5 W, yet delivering twice the processing throughput. Because of their reduced size, weight, and power, these newer designs are especially applicable to planetary instrument requirements. The basis of our design evolution has been the availability of more powerful processor chip sets and the development of higher density packaging technology, coupled with more aggressive design strategies in incorporating high-density FPGA technology and use of high-density memory chips. In addition to reductions in size, weight, and power, the newer designs also address the necessity of survival in the harsh radiation environment of space. Spurred by participation in such programs as MSTI, LACE, RME, Delta 181, Delta Star, and RADARSAT, our designs have evolved in response to program demands to be small, low-powered units, radiation tolerant enough to be suitable for both Earth-orbit microsats and for planetary instruments. Present designs already include MIL-STD-1750 and Multi-Chip Module (MCM) technology with near-term plans to include RISC processors and higher-density MCM's. Long term plans include development of whole-core processors on one or two MCM's.

  11. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  12. Constructing probabilistic scenarios for wide-area solar power generation

    DOE PAGES

    Woodruff, David L.; Deride, Julio; Staid, Andrea; ...

    2017-12-22

    Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.
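
    A simplified sketch of the quantile idea, using empirical quantiles of historical errors in place of the paper's epi-spline-fitted distributions; the forecast value and error sample are fabricated purely for illustration.

        import numpy as np
        rng = np.random.default_rng(1)
        errors = rng.normal(0.0, 5.0, size=200)   # historical forecast errors (MW)
        forecast = 42.0                           # day-ahead point forecast (MW)
        for q in (0.1, 0.5, 0.9):
            scenario = forecast + np.quantile(errors, q)
            print(f"P{int(q * 100)} scenario: {scenario:.1f} MW")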

  13. Constructing probabilistic scenarios for wide-area solar power generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodruff, David L.; Deride, Julio; Staid, Andrea

    Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions.more » Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.« less

  14. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS § 5.21...

  15. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS § 5.21...

  16. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...

  17. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
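
    A minimal sketch of the general FFT-based periodogram technique follows; this illustrates the computation, not the ACSL-interfaced program itself.

        import numpy as np
        fs = 100.0                                 # sampling rate (Hz)
        t = np.arange(0.0, 20.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 5.0 * t)            # test signal: a 5 Hz tone
        X = np.fft.rfft(x)
        pds = np.abs(X) ** 2 / (fs * x.size)       # one-sided power density estimate
        pds[1:-1] *= 2.0                           # fold negative frequencies
        f = np.fft.rfftfreq(x.size, 1.0 / fs)
        print(f[np.argmax(pds)])                   # 5.0 -- the dominant frequency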

  18. Additive Manufacturing/Diagnostics via the High Frequency Induction Heating of Metal Powders: The Determination of the Power Transfer Factor for Fine Metallic Spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios, Orlando; Radhakrishnan, Balasubramaniam; Caravias, George

    2015-03-11

    Grid Logic Inc. is developing a method for sintering and melting fine metallic powders for additive manufacturing using spatially-compact, high-frequency magnetic fields called Micro-Induction Sintering (MIS). One of the challenges in advancing MIS technology for additive manufacturing is in understanding the power transfer to the particles in a powder bed. This knowledge is important to achieving efficient power transfer, control, and selective particle heating during the MIS process needed for commercialization of the technology. The project's work provided a rigorous physics-based model for induction heating of fine spherical particles as a function of frequency and particle size. This simulation improved upon Grid Logic's earlier models and provides guidance that will make the MIS technology more effective. The project model will be incorporated into Grid Logic's power control circuit of the MIS 3D printer product and its diagnostics technology to optimize the sintering process for part quality and energy efficiency.
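    For background, the frequency and particle-size dependence enters through the electromagnetic skin depth. The snippet below is a textbook-level illustration of that scaling (my assumption, not Grid Logic's model; the material values are examples):

```python
# Skin depth delta = sqrt(2*rho / (omega * mu)); induction power transfer to a
# sphere depends strongly on the ratio of particle radius to delta.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def skin_depth(resistivity, freq_hz, mu_r=1.0):
    """Skin depth (m) of a conductor at a given excitation frequency."""
    omega = 2.0 * np.pi * freq_hz
    return np.sqrt(2.0 * resistivity / (omega * MU0 * mu_r))

# Example: ~25 um titanium particle (rho ~ 4.2e-7 ohm-m) in a 10 MHz field
ratio = 25e-6 / skin_depth(4.2e-7, 10e6)  # radius / skin depth
```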

  19. GPS synchronized power system phase angle measurements

    NASA Astrophysics Data System (ADS)

    Wilson, Robert E.; Sterlina, Patrick S.

    1994-09-01

    This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS synchronized phasor measurement units (PMUs) were installed before testing. The PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer simulations of the dynamic response of the power system to naturally occurring short circuits; these simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work compares computer simulations of the same event with the field measurements.

  20. Theoretical Considerations for Improving the Pulse Power of a Battery through the Addition of a Second Electrochemically Active Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knehr, K. W.; West, Alan C.

    Here, porous electrode theory is used to conduct case studies for when the addition of a second electrochemically active material can improve the pulse-power performance of an electrode. Case studies are conducted for the positive electrode of a sodium metal-halide battery and the graphite negative electrode of a lithium “rocking chair” battery. The replacement of a fraction of the nickel chloride capacity with iron chloride in a sodium metal-halide electrode and the replacement of a fraction of the graphite capacity with carbon black in a lithium-ion negative electrode were both predicted to increase the maximum pulse power by up to 40%. In general, whether or not a second electrochemically active material increases the pulse power depends on the relative importance of ohmic-to-charge transfer resistances within the porous structure, the capacity fraction of the second electrochemically active material, and the kinetic and thermodynamic parameters of the two active materials.
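    As a back-of-the-envelope illustration of the tradeoff (a toy model of my own, not the paper's porous-electrode treatment; all numbers are hypothetical):

```python
# Matched-load pulse power for a cell of open-circuit voltage E and internal
# resistance R is E**2 / (4*R); blending a second active material is crudely
# modeled here as a parallel charge-transfer resistance.
def max_pulse_power(ocv, r_internal):
    return ocv ** 2 / (4.0 * r_internal)

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

base = max_pulse_power(2.58, 0.010)                    # single active material
blend = max_pulse_power(2.58, parallel(0.012, 0.040))  # two materials in parallel
```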

  1. Theoretical Considerations for Improving the Pulse Power of a Battery through the Addition of a Second Electrochemically Active Material

    DOE PAGES

    Knehr, K. W.; West, Alan C.

    2016-05-26

    Here, porous electrode theory is used to conduct case studies for when the addition of a second electrochemically active material can improve the pulse-power performance of an electrode. Case studies are conducted for the positive electrode of a sodium metal-halide battery and the graphite negative electrode of a lithium “rocking chair” battery. The replacement of a fraction of the nickel chloride capacity with iron chloride in a sodium metal-halide electrode and the replacement of a fraction of the graphite capacity with carbon black in a lithium-ion negative electrode were both predicted to increase the maximum pulse power by up to 40%. In general, whether or not a second electrochemically active material increases the pulse power depends on the relative importance of ohmic-to-charge transfer resistances within the porous structure, the capacity fraction of the second electrochemically active material, and the kinetic and thermodynamic parameters of the two active materials.

  2. Avoiding Split Attention in Computer-Based Testing: Is Neglecting Additional Information Facilitative?

    ERIC Educational Resources Information Center

    Jarodzka, Halszka; Janssen, Noortje; Kirschner, Paul A.; Erkens, Gijsbert

    2015-01-01

    This study investigated whether design guidelines for computer-based learning can be applied to computer-based testing (CBT). Twenty-two students completed a CBT exam with half of the questions presented in a split-screen format that was analogous to the original paper-and-pencil version and half in an integrated format. Results show that students…

  3. Power and the Power Simulation: Then and Now

    ERIC Educational Resources Information Center

    Bolman, Lee; Deal, Terrence E.

    2017-01-01

    Lee Bolman and Terrence Deal think of how much has changed since the era of innocence when they first published "A Simple But Powerful Power Simulation"--before the advent of cell phones, personal computers, the Internet, e-mail, Facebook, and Twitter. So much has changed, and yet the fundamentals of human behavior, social interaction,…

  4. High-power AlGaAs channeled substrate planar diode lasers for spaceborne communications

    NASA Technical Reports Server (NTRS)

    Connolly, J. C.; Goldstein, B.; Pultz, G. N.; Slavin, S. E.; Carlin, D. B.; Ettenberg, M.

    1988-01-01

    A high power channeled substrate planar AlGaAs diode laser with an emission wavelength of 8600 to 8800 A was developed. The optoelectronic behavior (power current, single spatial and spectral behavior, far field characteristics, modulation, and astigmatism properties) and results of computer modeling studies on the performance of the laser are discussed. Lifetest data on these devices at high output power levels is also included. In addition, a new type of channeled substrate planar laser utilizing a Bragg grating to stabilize the longitudinal mode was demonstrated. The fabrication procedures and optoelectronic properties of this new diode laser are described.

  5. Future in biomolecular computation

    NASA Astrophysics Data System (ADS)

    Wimmer, E.

    1988-01-01

    Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever growing complexity. Due to the scaling of the computational requirements with increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point, if an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, speed and power of computers will increase while the price/performance ratio will become more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point, how fast massively parallel systems will become easy enough to use so that new methodological developments can be pursued on such computers. Current trends show that distributed processing such as the combination of convenient graphics workstations and powerful general-purpose supercomputers will lead to a new style

  6. Small Universal Bacteria and Plasmid Computing Systems.

    PubMed

    Wang, Xun; Zheng, Pan; Ma, Tongmao; Song, Tao

    2018-05-29

    Bacterial computing is a known candidate in natural computing, the aim being to construct "bacterial computers" for solving complex problems. In this paper, a new kind of bacterial computing system, named the bacteria and plasmid computing system (BP system), is proposed. We investigate the computational power of BP systems with finite numbers of bacteria and plasmids. Specifically, it is shown constructively that a BP system with 2 bacteria and 34 plasmids is Turing universal. The results provide a theoretical cornerstone for constructing powerful bacterial computers and demonstrate a paradigm that uses a "reasonable" number of bacteria and plasmids for such devices.

  7. Simulation of Power Collection Dynamics for Simply Supported Power Rail

    DOT National Transportation Integrated Search

    1972-11-01

    The mathematical model of a sprung mass moving along a simply supported beam is used to analyze the dynamics of a power-collection system. A computer simulation of one-dimensional motion is used to demonstrate the phenomenon of collector-power rail i...

  8. MHD Advanced Power Train Phase I, Final Report, Volume 7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. R. Jones

    This appendix provides additional data in support of the MHD/Steam Power Plant Analyses reported in Volume 5. The data is in the form of 3PA/SUMARY computer code printouts. The order of presentation in all four cases is as follows: (1) Overall Performance; (2) Component/Subsystem Information; (3) Plant Cost Accounts Summary; and (4) Plant Costing Details and Cost of Electricity.

  9. Teach Graphic Design Basics with PowerPoint

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Spotts, Thomas H.

    2007-01-01

    While PowerPoint is generally regarded as simply software for creating slide presentations, it includes often overlooked--but powerful--drawing tools. Because it is part of the Microsoft Office package, PowerPoint comes preloaded on many computers and thus is already available in many classrooms. Since most computers are not preloaded with good…

  10. Programmable computing with a single magnetoresistive element

    NASA Astrophysics Data System (ADS)

    Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.

    2003-10-01

    The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proven approach of enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
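    The run-time selectable behavior can be mimicked in software with a single threshold element. The sketch below is a conceptual model only (the mapping of gate type to bias is my assumption, not the MRAM implementation):

```python
# One programmable threshold element: sum two binary inputs and a programmed
# bias, compare against a fixed threshold, and optionally invert the output.
def gate(a, b, kind):
    bias, invert = {"AND": (0, False), "OR": (1, False),
                    "NAND": (0, True), "NOR": (1, True)}[kind]
    out = (a + b + bias) >= 2  # fixed threshold of 2
    return (not out) if invert else out

assert gate(1, 1, "NAND") is False and gate(0, 0, "NOR") is True
```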

  11. Generative Representations for Computer-Automated Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2006-01-01

    With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D objects.

  12. Architecture and data processing alternatives for the TSE computer. Volume 2: Extraction of topological information from an image by the Tse computer

    NASA Technical Reports Server (NTRS)

    Jones, J. R.; Bodenheimer, R. E.

    1976-01-01

    A simple programmable Tse processor organization and arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the Tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.

  13. Power allocation and range performance considerations for a dual-frequency EBPSK/MPPSK system

    NASA Astrophysics Data System (ADS)

    Yao, Yu; Wu, Lenan; Zhao, Junhui

    2017-12-01

    Extended binary phase shift keying/M-ary position phase shift keying (EBPSK/MPPSK)-MODEM provides radar and communication functions on a single hardware platform with a single waveform. However, its range estimation accuracy is worse than that of continuous-wave (CW) radar because of the power imbalance between the two carrier frequencies. In this article, the power allocation method for dual-frequency EBPSK/MPPSK modulated systems is presented. The power of the two signal transmitters is allocated so that the power in the two carrier frequencies is equal. The power allocation ratios for two types of modulation systems are obtained. Moreover, considerations regarding the operating range of the dual-frequency system are analysed. In addition to theoretical considerations, computer simulations are provided to illustrate the performance.

  14. Analysis of large power systems

    NASA Technical Reports Server (NTRS)

    Dommel, H. W.

    1975-01-01

    Computer-oriented power systems analysis procedures in the electric utilities are surveyed. The growth of electric power systems is discussed along with the solution of sparse network equations, power flow, and stability studies.

  15. US power plant sites at risk of future sea-level rise

    NASA Astrophysics Data System (ADS)

    Bierkandt, R.; Auffhammer, M.; Levermann, A.

    2015-12-01

    Unmitigated greenhouse gas emissions may increase global mean sea-level by about 1 meter during this century. Such elevation of the mean sea-level enhances the risk of flooding of coastal areas. We compute the power capacity that is currently out of reach of a 100-year coastal flood but will be exposed to such a flood by the end of the century for different US states, if no adaptation measures are taken. The additional exposed capacity varies strongly among states. For Delaware it is 80% of the mean generated power load. For New York this number is 63% and for Florida 43%. The capacity that needs additional protection compared to today increases by more than 250% for Texas, 90% for Florida and 70% for New York. Current development in power plant building points towards a reduced future exposure to sea-level rise: proposed and planned power plants are less exposed than those which are currently operating. However, power plants that have been retired or canceled were less exposed than those operating at present. If sea-level rise is properly accounted for in future planning, an adaptation to sea-level rise may be costly but possible.

  16. The Next Computer Revolution.

    ERIC Educational Resources Information Center

    Peled, Abraham

    1987-01-01

    Discusses some of the future trends in the use of the computer in our society, suggesting that computing is now entering a new phase in which it will grow exponentially more powerful, flexible, and sophisticated in the next decade. Describes some of the latest breakthroughs in computer hardware and software technology. (TW)

  17. Power management system

    DOEpatents

    Algrain, Marcelo C.; Johnson, Kris W.; Akasam, Sivaprasad; Hoff, Brian D.

    2007-10-02

    A method of managing power resources for an electrical system of a vehicle may include identifying enabled power sources from among a plurality of power sources in electrical communication with the electrical system and calculating a threshold power value for the enabled power sources. A total power load placed on the electrical system by one or more power consumers may be measured. If the total power load exceeds the threshold power value, then a determination may be made as to whether one or more additional power sources is available from among the plurality of power sources. At least one of the one or more additional power sources may be enabled, if available.
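    Stripped of patent language, the claimed management loop is simple; the following is a minimal sketch under my own naming (none of it is taken from the patent):

```python
# Enable an additional power source when the total measured load exceeds a
# threshold computed from the currently enabled sources.
def manage_power(enabled, available, loads, headroom=0.9):
    threshold = headroom * sum(src["rating"] for src in enabled)
    if sum(loads) > threshold and available:
        enabled.append(available.pop(0))  # bring one more source online
    return enabled
```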

  18. Reducing Communication in Algebraic Multigrid Using Additive Variants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Additive AMG variants provide increased parallelism and fewer messages per cycle, but generally exhibit slower convergence. Here we present various new additive variants with convergence rates that are significantly improved compared to the classical additive algebraic multigrid method and investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.
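    For orientation, the structural difference from a standard (multiplicative) cycle is that the smoother and the coarse-grid correction are both computed from the same residual and summed, so they can run concurrently. A schematic two-level version (my sketch of the general additive idea, not the paper's new variants):

```python
# Additive two-level correction: Jacobi smoothing and coarse-grid correction
# are formed from one residual and applied together with damping.
import numpy as np

def additive_two_level(A, P, x, b, omega=0.5):
    r = b - A @ x
    smooth = r / np.diag(A)                    # diagonal (Jacobi) smoothing
    Ac = P.T @ A @ P                           # Galerkin coarse operator
    coarse = P @ np.linalg.solve(Ac, P.T @ r)  # coarse-grid correction
    return x + omega * (smooth + coarse)       # damped additive update
```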

  19. Reducing Communication in Algebraic Multigrid Using Additive Variants

    DOE PAGES

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    2014-02-12

    Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Additive AMG variants provide increased parallelism and fewer messages per cycle, but generally exhibit slower convergence. Here we present various new additive variants with convergence rates that are significantly improved compared to the classical additive algebraic multigrid method and investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.

  20. Characterization of real-time computers

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Krishna, C. M.

    1984-01-01

    A real-time system consists of a computer controller and controlled processes. Despite the synergistic relationship between these two components, they have been traditionally designed and analyzed independently of and separately from each other; namely, computer controllers by computer scientists/engineers and controlled processes by control scientists. As a remedy for this problem, in this report real-time computers are characterized by performance measures based on computer controller response time that are: (1) congruent to the real-time applications, (2) able to offer an objective comparison of rival computer systems, and (3) experimentally measurable/determinable. These measures, unlike others, provide the real-time computer controller with a natural link to controlled processes. In order to demonstrate their utility and power, these measures are first determined for example controlled processes on the basis of control performance functionals. They are then used for two important real-time multiprocessor design applications - the number-power tradeoff and fault-masking and synchronization.

  1. Cloud Computing with iPlant Atmosphere.

    PubMed

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  2. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  3. Two-spectral Yang-Baxter operators in topological quantum computation

    NASA Astrophysics Data System (ADS)

    Sanchez, William F.

    2011-05-01

    One of the current trends in quantum computing is the application of algebraic topological methods in the design of new algorithms and quantum computers, giving rise to topological quantum computing. One of the tools used in it is the Yang-Baxter equation, whose solutions are interpreted as universal quantum gates. Lately, more general Yang-Baxter equations have been investigated, making progress as two-spectral equations and Yang-Baxter systems. This paper applies these new findings to the field of topological quantum computation, more specifically, the proposition of the two-spectral Yang-Baxter operators as universal quantum gates for 2-qubit and 2-qutrit systems, obtaining 4x4 and 9x9 matrices respectively, and further elaboration of the corresponding Hamiltonian by the use of the computer algebra software Mathematica® and its Qucalc package. In addition, possible physical systems to which the obtained Yang-Baxter operators can be applied are considered. The present work demonstrates the utility of the Yang-Baxter equation for generating universal quantum gates and the power of computer algebra for designing them; it is expected that these mathematical studies will contribute to the further development of quantum computers.
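    For reference, the constant Yang-Baxter equation whose invertible solutions are read as universal gates, together with the spectral-parameter form generalized in the two-spectral setting (standard textbook statements, not equations reproduced from this paper):

```latex
% Constant Yang-Baxter equation on V \otimes V \otimes V:
(R \otimes I)(I \otimes R)(R \otimes I) = (I \otimes R)(R \otimes I)(I \otimes R)
% Parameter-dependent form, with subscripts indicating the tensor factors acted on:
R_{12}(u)\, R_{13}(u+v)\, R_{23}(v) = R_{23}(v)\, R_{13}(u+v)\, R_{12}(u)
```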

  4. Hybrid Voronoi diagrams, their computation and reduction for applications in computational biochemistry.

    PubMed

    Manak, Martin; Zemek, Michal; Szkandera, Jakub; Kolingerova, Ivana; Papaleo, Elena; Lambrughi, Matteo

    2017-06-01

    Geometric models of molecular structures are often described as a set of balls, where balls represent individual atoms. The ability to describe and explore the empty space among these balls is important, e.g., in the analysis of the interaction of enzymes with substrates, ligands and solvent molecules. Voronoi diagrams from the field of computational geometry are often used here, because they provide a mathematical description of how the whole space can be divided into regions assigned to individual atoms. This paper introduces a combination of two different types of Voronoi diagrams into a new hybrid Voronoi diagram - one part of this diagram belongs to the additively weighted (aw-Voronoi) diagram and the other to the power diagram. The boundary between them is controlled by a user-defined constant (the probe radius). Both parts are computed by different algorithms, which are already known. The reduced aw-Voronoi diagram is then obtained by removing the power diagram part from the hybrid diagram. Reduced aw-Voronoi diagrams are perfectly tailored for the analysis of dynamic molecular structures; their computation is faster and storage requirements are lower than in the case of complete aw-Voronoi diagrams. Here, we showed their application to key proteins in cancer research, such as p53 and ARID proteins, as case studies. We identified a biologically relevant cavity in p53 structural ensembles generated by molecular dynamics simulations and analyzed its accessibility, attesting the potential of our approach. This method is relevant for cancer research since it permits a dynamical view of cavities and pockets in proteins that could be affected by mutations in the disease. Our approach opens novel prospects for the study of cancer-related proteins by molecular simulations and the identification of novel targets for drug design. Copyright © 2017 Elsevier Inc. All rights reserved.
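    The two distance functions being stitched together are standard in computational geometry; for concreteness (the definitions are textbook material, the notation mine):

```python
# Distance of a point x to a ball with center c and radius r under the two
# diagram types combined above: additively weighted vs. power (Laguerre).
import numpy as np

def aw_distance(x, c, r):
    return np.linalg.norm(x - c) - r        # additively weighted distance

def power_distance(x, c, r):
    return np.sum((x - c) ** 2) - r ** 2    # power distance
```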

  5. Using 3D infrared imaging to calibrate and refine computational fluid dynamic modeling for large computer and data centers

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2011-05-01

    Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any 3-D computational fluid dynamic modeling predicted during construction, and as time goes by, the efficiency and effectiveness of the actual cooling depart even farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which lowers costs and also improves reliability.

  6. Performance evaluation of power transmission coils for powering endoscopic wireless capsules.

    PubMed

    Basar, Md Rubel; Ahmad, Mohd Yazed; Cho, Jongman; Ibrahim, Fatimah

    2015-01-01

    This paper presents an analysis of the H-field generated by a simple solenoid, a pair of solenoids, a pair of double-layer solenoids, a segmented solenoid, and Helmholtz power transmission coils (PTCs) to power an endoscopic wireless capsule (WC). The H-fields were computed using finite element analysis based on partial differential equations. Three parameters were considered in the analysis: i) the maximum level of H-field (Hmax) to which the patient's body would be exposed, ii) the minimum level of H-field (Hmin) effective for power transmission, and iii) uniformity of the H-field. We validated our analysis by comparing the computed data with data measured from a fabricated Helmholtz PTC. This analysis disclosed that, at the same excitation power, all the PTCs are able to transfer the same amount of minimum usable power, since they generate almost equal values of Hmin. The level of electromagnetic exposure and the power transfer stability vary significantly across the PTCs, mainly due to the different levels of Hmax and H-field uniformity. The segmented solenoid PTC would cause the lowest exposure, and this PTC can transfer the maximum amount of power. The Helmholtz PTC would be able to transfer the most stable power with a moderate level of exposure.
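    As a sanity check on field levels, the center field of a Helmholtz pair has a closed form (a textbook formula, not the paper's finite element model; the example values are mine):

```python
# Axial H-field at the midpoint of a Helmholtz coil pair of N turns per coil,
# current I, and coil radius R: H = (4/5)**1.5 * N * I / R.
def helmholtz_center_H(turns, current_a, radius_m):
    """Center H-field (A/m) of a Helmholtz pair."""
    return (4.0 / 5.0) ** 1.5 * turns * current_a / radius_m

H = helmholtz_center_H(turns=50, current_a=0.5, radius_m=0.15)  # ~119 A/m
```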

  7. Rankine engine solar power generation. I - Performance and economic analysis

    NASA Technical Reports Server (NTRS)

    Gossler, A. A.; Orrock, J. E.

    1981-01-01

    Results of a computer simulation of the performance of a solar flat plate collector powered electrical generation system are presented. The simulation was configured to include locations in New Mexico, North Dakota, Tennessee, and Massachusetts, and considered a water-based heat-transfer fluid collector system with storage. The collectors also powered a Rankine-cycle boiler filled with a low temperature working fluid. The generator was considered to be run only when excess solar heat and full storage would otherwise require heat purging through the collectors. All power was directed into the utility grid. The benefit of adding the solar powered generator unit was found to depend on site location and collector area; the addition reduced the effective solar cost for collector areas greater than 400-670 sq m. The sites were ranked economically, best to worst: New Mexico, North Dakota, Massachusetts, and Tennessee.

  8. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    NASA Astrophysics Data System (ADS)

    Shaat, Musbah; Bader, Faouzi

    2010-12-01

    Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under both total power and interference introduced to the primary users (PUs) constraints. The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM and FBMC based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near optimal performance and proves the efficiency of using FBMC in the CR context.
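    For context, the unconstrained version of this capacity maximization is the classic water-filling allocation; the sketch below shows that baseline only (the paper's algorithm additionally enforces per-band interference limits):

```python
# Water-filling: allocate total_power across subcarriers with per-carrier
# noise levels to maximize sum capacity; the water level mu is found by bisection.
import numpy as np

def water_filling(noise, total_power, iters=60):
    lo, hi = 0.0, total_power + float(noise.max())
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - noise, 0.0)
        if p.sum() < total_power:
            lo = mu   # water level too low
        else:
            hi = mu   # water level too high
    return np.maximum(mu - noise, 0.0)
```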

  9. Efficiency of a thermodynamic motor at maximum power

    NASA Astrophysics Data System (ADS)

    Moreau, M.; Gaveau, B.; Schulman, L. S.

    2012-02-01

    Several recent theories address the efficiency of a macroscopic thermodynamic motor at maximum power and question the so-called Curzon-Ahlborn (CA) efficiency. Considering the entropy exchanges and productions in an n-sources motor, we study the maximization of its power and show that the controversies are partly due to some imprecision in the maximization variables. When power is maximized with respect to the system temperatures, these temperatures are proportional to the square root of the corresponding source temperatures, which leads to the CA formula for a bithermal motor. On the other hand, when power is maximized with respect to the transition durations, the Carnot efficiency of a bithermal motor admits the CA efficiency as a lower bound, which is attained if the duration of the adiabatic transitions can be neglected. Additionally, we compute the energetic efficiency, or “sustainable efficiency,” which can be defined for n sources, and we show that it has no other universal upper bound than 1, but that in certain situations, which are favorable for power production, it does not exceed ½.
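    The Curzon-Ahlborn efficiency referred to throughout is the standard closed form, restated here for convenience:

```latex
% Efficiency at maximum power for a bithermal motor with source temperatures
% T_h > T_c, compared with the Carnot bound:
\eta_{\mathrm{CA}} = 1 - \sqrt{T_c / T_h}, \qquad \eta_{C} = 1 - T_c / T_h .
```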

  10. Efficiency of a thermodynamic motor at maximum power.

    PubMed

    Moreau, M; Gaveau, B; Schulman, L S

    2012-02-01

    Several recent theories address the efficiency of a macroscopic thermodynamic motor at maximum power and question the so-called Curzon-Ahlborn (CA) efficiency. Considering the entropy exchanges and productions in an n-sources motor, we study the maximization of its power and show that the controversies are partly due to some imprecision in the maximization variables. When power is maximized with respect to the system temperatures, these temperatures are proportional to the square root of the corresponding source temperatures, which leads to the CA formula for a bithermal motor. On the other hand, when power is maximized with respect to the transition durations, the Carnot efficiency of a bithermal motor admits the CA efficiency as a lower bound, which is attained if the duration of the adiabatic transitions can be neglected. Additionally, we compute the energetic efficiency, or "sustainable efficiency," which can be defined for n sources, and we show that it has no other universal upper bound than 1, but that in certain situations, which are favorable for power production, it does not exceed ½. © 2012 American Physical Society

  11. Engineering incremental resistive switching in TaOx based memristors for brain-inspired computing

    NASA Astrophysics Data System (ADS)

    Wang, Zongwei; Yin, Minghui; Zhang, Teng; Cai, Yimao; Wang, Yangyuan; Yang, Yuchao; Huang, Ru

    2016-07-01

    Brain-inspired neuromorphic computing is expected to revolutionize the architecture of conventional digital computers and lead to a new generation of powerful computing paradigms, where memristors with analog resistive switching are considered to be potential solutions for synapses. Here we propose and demonstrate a novel approach to engineering the analog switching linearity in TaOx based memristors, that is, by homogenizing the filament growth/dissolution rate via the introduction of an ion diffusion limiting layer (DLL) at the TiN/TaOx interface. This has effectively mitigated the commonly observed two-regime conductance modulation behavior and led to more uniform filament growth (dissolution) dynamics with time, therefore significantly improving the conductance modulation linearity that is desirable in neuromorphic systems. In addition, the introduction of the DLL also served to reduce the power consumption of the memristor, and important synaptic learning rules in biological brains such as spike timing dependent plasticity were successfully implemented using these optimized devices. This study could provide general implications for continued optimizations of memristor performance for neuromorphic applications, by carefully tuning the dynamics involved in filament growth and dissolution.

  12. Utilizing gamma band to improve mental task based brain-computer interface design.

    PubMed

    Palaniappan, Ramaswamy

    2006-09-01

    A common method for designing brain-computer interfaces (BCI) is to use electroencephalogram (EEG) signals extracted during mental tasks. In these BCI designs, features from EEG such as power and asymmetry ratios from delta, theta, alpha, and beta bands have been used in classifying different mental tasks. In this paper, the performance of the mental task based BCI design is improved by using spectral power and asymmetry ratios from the gamma (24-37 Hz) band in addition to the lower frequency bands. In the experimental study, EEG signals extracted during five mental tasks from four subjects were used. An Elman neural network (ENN) trained by the resilient backpropagation algorithm was used to classify the power and asymmetry ratios from EEG into different combinations of two mental tasks. The results indicated that (1) the classification performance and training time of the BCI design were improved through the use of additional gamma band features; (2) classification performances were nearly invariant to the number of ENN hidden units or feature extraction method.
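    The band-power and asymmetry features are straightforward to compute; a sketch follows (the band edges track the abstract; everything else, including the sampling rate, is my assumption):

```python
# Band power via Welch PSD, plus an inter-channel asymmetry ratio.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 24), "gamma": (24, 37)}

def band_powers(eeg, fs=250):
    f, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # eeg: 1-D signal, >= 2 s long
    return {name: np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for name, (lo, hi) in BANDS.items()}

def asymmetry(p_left, p_right):
    return (p_left - p_right) / (p_left + p_right)
```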

  13. The Effect of Emphasizing Mathematical Structure in the Acquisition of Whole Number Computation Skills (Addition and Subtraction) By Seven- and Eight-Year Olds: A Clinical Investigation.

    ERIC Educational Resources Information Center

    Uprichard, A. Edward; Collura, Carolyn

    This investigation sought to determine the effect of emphasizing mathematical structure in the acquisition of computational skills by seven- and eight-year-olds. The meaningful development-of-structure approach emphasized closure, commutativity, associativity, and the identity element of addition; the inverse relationship between addition and…

  14. Nano-Magnets and Additive Manufacturing for Electric Motors

    NASA Technical Reports Server (NTRS)

    Misra, Ajay K.

    2014-01-01

    High power density is required for application of electric motors in hybrid electric propulsion. Potential path to achieve high power density in electric motors include advanced materials, lightweight thermal management, lightweight structural concepts, high power density power electronics, and advanced manufacturing. This presentation will focus on two key technologies for achieving high power density, advanced magnets and additive manufacturing. The maximum energy product in current magnets is reaching their theoretical limits as a result of material and process improvements. Future improvements in the maximum energy product for magnets can be achieved through development of nanocomposite magnets combining the hard magnetic phase and soft magnetic phase at the nanoscale level. The presentation will provide an overview of the current state of development for nanocomposite magnets and the future path for doubling the maximum energy product. The other part of the presentation will focus on the role of additive manufacturing in fabrication of high power density electric motors. The presentation will highlight the potential opportunities for applying additive manufacturing to fabricate electric motors.

  15. RighTime: A real time clock correcting program for MS-DOS-based computer systems

    NASA Technical Reports Server (NTRS)

    Becker, G. Thomas

    1993-01-01

    A computer program is described which effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power on) and cool (power off) errors are corrected, usually yielding better than one part per million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available which allow visualization of RighTime's actions, verification of its performance, and display of its history log, and which provide data for graphing of the system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
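    The learning step amounts to estimating a drift rate between accurate time sets and applying a linear correction thereafter; a minimal sketch of that idea (my reconstruction, not RighTime's algorithm):

```python
# Estimate clock drift from two accurate time sets, then correct readings.
def drift_rate(set1, set2):
    """Each set is (system_time, reference_time) captured at a time set."""
    sys_elapsed = set2[0] - set1[0]
    ref_elapsed = set2[1] - set1[1]
    return (ref_elapsed - sys_elapsed) / sys_elapsed  # e.g. 1e-6 = 1 ppm slow

def corrected_time(sys_now, last_set, rate):
    return last_set[1] + (sys_now - last_set[0]) * (1.0 + rate)
```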

  16. 20 CFR 404.278 - Additional cost-of-living increase.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 404.278 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Cost-Of-Living Increases § 404.278...) Measuring period for the additional increase—(1) Beginning. To compute the additional increase, we begin...

  17. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  18. Adsorption of molecular additive onto lead halide perovskite surfaces: A computational study on Lewis base thiophene additive passivation

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yu, Fengxi; Chen, Lihong; Li, Jingfa

    2018-06-01

    Organic additives, such as the Lewis base thiophene, have been successfully applied to passivate halide perovskite surfaces, improving the stability and properties of perovskite devices based on CH3NH3PbI3. Yet, the detailed nanostructure of the perovskite surface passivated by additives and the mechanisms of such passivation are not well understood. This study presents a nanoscopic view on the interfacial structure of an additive/perovskite interface, consisting of a Lewis base thiophene molecular additive and a lead halide perovskite surface substrate, providing insights on the mechanisms that molecular additives can passivate the halide perovskite surfaces and enhance the perovskite-based device performance. Molecular dynamics study on the interactions between water molecules and the perovskite surfaces passivated by the investigated additive reveal the effectiveness of employing the molecular additives to improve the stability of the halide perovskite materials. The additive/perovskite surface system is further probed via molecular engineering the perovskite surfaces. This study reveals the nanoscopic structure-property relationships of the halide perovskite surface passivated by molecular additives, which helps the fundamental understanding of the surface/interface engineering strategies for the development of halide perovskite based devices.

  19. Good Health: The Power of Power

    ERIC Educational Resources Information Center

    Corbin, Charles B.; Janz, Kathleen F.; Baptista, Fátima

    2017-01-01

    Power has long been considered to be a skill-related fitness component. However, based on recent evidence, a strong case can be made for the classification of power as a health-related fitness component. Additionally, the evidence indicates that performing physical activities that build power is associated with the healthy development of bones…

  20. Mechanism and enantioselectivity in palladium-catalyzed conjugate addition of arylboronic acids to β-substituted cyclic enones: insights from computation and experiment.

    PubMed

    Holder, Jeffrey C; Zou, Lufeng; Marziale, Alexander N; Liu, Peng; Lan, Yu; Gatti, Michele; Kikushima, Kotaro; Houk, K N; Stoltz, Brian M

    2013-10-09

    Enantioselective conjugate additions of arylboronic acids to β-substituted cyclic enones have been previously reported from our laboratories. Air- and moisture-tolerant conditions were achieved with a catalyst derived in situ from palladium(II) trifluoroacetate and the chiral ligand (S)-t-BuPyOx. We now report a combined experimental and computational investigation on the mechanism, the nature of the active catalyst, the origins of the enantioselectivity, and the stereoelectronic effects of the ligand and the substrates of this transformation. Enantioselectivity is controlled primarily by steric repulsions between the t-Bu group of the chiral ligand and the α-methylene hydrogens of the enone substrate in the enantiodetermining carbopalladation step. Computations indicate that the reaction occurs via formation of a cationic arylpalladium(II) species, and subsequent carbopalladation of the enone olefin forms the key carbon-carbon bond. Studies of nonlinear effects and stoichiometric and catalytic reactions of isolated (PyOx)Pd(Ph)I complexes show that a monomeric arylpalladium-ligand complex is the active species in the selectivity-determining step. The addition of water and ammonium hexafluorophosphate synergistically increases the rate of the reaction, corroborating the hypothesis that a cationic palladium species is involved in the reaction pathway. These additives also allow the reaction to be performed at 40 °C and facilitate an expanded substrate scope.

  1. Mechanism and Enantioselectivity in Palladium-Catalyzed Conjugate Addition of Arylboronic Acids to β-Substituted Cyclic Enones: Insights from Computation and Experiment

    PubMed Central

    Holder, Jeffrey C.; Zou, Lufeng; Marziale, Alexander N.; Liu, Peng; Lan, Yu; Gatti, Michele; Kikushima, Kotaro; Houk, K. N.; Stoltz, Brian M.

    2013-01-01

    Enantioselective conjugate additions of arylboronic acids to β-substituted cyclic enones have been reported previously from our laboratories. Air and moisture tolerant conditions were achieved with a catalyst derived in situ from palladium(II) trifluoroacetate and the chiral ligand (S)-t-BuPyOx. We now report a combined experimental and computational investigation on the mechanism, the nature of the active catalyst, the origins of the enantioselectivity, and the stereoelectronic effects of the ligand and the substrates of this transformation. Enantioselectivity is controlled primarily by steric repulsions between the t-Bu group of the chiral ligand and the α-methylene hydrogens of the enone substrate in the enantiodetermining carbopalladation step. Computations indicate that the reaction occurs via formation of a cationic arylpalladium(II) species, and subsequent carbopalladation of the enone olefin forms the key carbon-carbon bond. Studies of non-linear effects and stoichiometric and catalytic reactions of isolated (PyOx)Pd(Ph)I complexes show that a monomeric arylpalladium-ligand complex is the active species in the selectivity-determining step. The addition of water and ammonium hexafluorophosphate synergistically increases the rate of the reaction, corroborating the hypothesis that a cationic palladium species is involved in the reaction pathway. These additives also allow the reaction to be performed at 40 °C and facilitate an expanded substrate scope. PMID:24028424

  2. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Yu, Hua

    2018-05-01

    Finding new related candidate diseases for known drugs provides an effective method for fast-speed and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatic tool for identifying potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.
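    The general flavor of similarity-kernel link prediction is easy to sketch (a generic stand-in of my own, not DR2DI's actual algorithm, which is available at the repository above):

```python
# Kernel ridge scoring of drug-disease associations from a precomputed
# drug-drug similarity kernel.
import numpy as np

def kernel_ridge_scores(K_train, y, K_query, lam=1.0):
    """K_train: (n, n) similarity kernel among training drugs.
    y: (n,) 0/1 association labels for one disease.
    K_query: (m, n) similarities of query drugs to training drugs."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_query @ alpha  # higher score = stronger predicted association
```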

  3. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Yu, Hua

    2018-04-01

    Finding new related candidate diseases for known drugs provides an effective method for fast-speed and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatic tool for identifying potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.

  4. Durability Testing of Additively Manufactured High Power Microwave Structures

    DTIC Science & Technology

    2017-10-29

    ... the aluminum anode, generating microwave powers in excess of 150 MW with an average instantaneous peak total efficiency of 27% ± 10%. After 100 shots on each structure, neither anode showed any signs of ... A uniform axial magnetic field was varied on a per-shot basis from 0.13 to 0.31 T. A #304 stainless steel vacuum chamber housed the magnetron.

  5. Feasibility of an ultra-low power digital signal processor platform as a basis for a fully implantable brain-computer interface system.

    PubMed

    Wang, Po T; Gandasetiawan, Keulanna; McCrimmon, Colin M; Karimi-Bidhendi, Alireza; Liu, Charles Y; Heydari, Payam; Nenadic, Zoran; Do, An H

    2016-08-01

    A fully implantable brain-computer interface (BCI) can be a practical tool to restore independence to those affected by spinal cord injury. We envision that such a BCI system will invasively acquire brain signals (e.g. electrocorticogram) and translate them into control commands for external prostheses. The feasibility of such a system was tested by implementing its benchtop analogue, centered around a commercial, ultra-low power (ULP) digital signal processor (DSP, TMS320C5517, Texas Instruments). A suite of signal processing and BCI algorithms, including (de)multiplexing, Fast Fourier Transform, power spectral density, principal component analysis, linear discriminant analysis, Bayes rule, and a finite state machine, was implemented and tested in the DSP. The system's signal acquisition fidelity was tested and characterized by acquiring harmonic signals from a function generator. In addition, the BCI decoding performance was tested, first with signals from a function generator, and subsequently using human electroencephalogram (EEG) during an eyes opening and closing task. On average, the system spent 322 ms to process and analyze 2 s of data. Crosstalk (< -65 dB) and harmonic distortion (~1%) were minimal. Timing jitter averaged 49 μs per 1000 ms. The online BCI decoding accuracies were 100% for both function generator and EEG data. These results show that a complex BCI algorithm can be executed on an ULP DSP without compromising performance. This suggests that the proposed hardware platform may be used as a basis for future, fully implantable BCI systems.
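    A desktop reconstruction of the decoding chain named above might look as follows (my sketch of the standard steps, not the authors' DSP implementation; the sampling rate and frequency band are assumptions):

```python
# Eyes-open vs eyes-closed decoding: log alpha-band PSD features, PCA, LDA.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def psd_features(epochs, fs=256):
    """epochs: (n_epochs, n_samples) EEG windows."""
    f, psd = welch(epochs, fs=fs, nperseg=fs)
    return np.log(psd[:, (f >= 8) & (f <= 12)])  # alpha-band log power

def train_decoder(X, y, n_components=4):
    pca = PCA(n_components=n_components).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
    return pca, lda
```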

  6. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  7. Solid-state Isotopic Power Source for Computer Memory Chips

    NASA Technical Reports Server (NTRS)

    Brown, Paul M.

    1993-01-01

    Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10 year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent, which is two to three times greater than the 6 to 8 percent capabilities of current thermoelectric systems. Radioisotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.

  8. Modular and Reusable Power System Design for the BRRISON Balloon Telescope

    NASA Astrophysics Data System (ADS)

    Truesdale, Nicholas A.

    High altitude balloons are emerging as low-cost alternatives to orbital satellites in the field of telescopic observation. The near-space environment of balloons allows optics to perform near their diffraction limit. In practice, this implies that a telescope similar to the Hubble Space Telescope could be flown for a cost of tens of millions as opposed to billions. While highly feasible, the design of a balloon telescope to rival Hubble is limited by funding. Until a prototype is proven and more support for balloon science is gained, projects remain limited in both hardware costs and man-hours. Thus, to effectively create and support balloon payloads, engineering designs must be efficient, modular, and if possible, reusable. This thesis focuses specifically on a modular power system design for the BRRISON comet-observing balloon telescope. Time- and cost-saving techniques are developed that can be used for future missions. A modular design process is achieved through the development of individual circuit elements that span a wide range of capabilities. Circuits for power conversion, switching and sensing are designed to be combined in any configuration. These include DC-DC regulators, MOSFET drivers for switching, isolated switches, current sensors and voltage sensing ADCs. Emphasis is also given to commercially available hardware. Pre-fabricated DC-DC converters and an Arduino microcontroller simplify the design process and offer proven, cost-effective performance. The design of the BRRISON power system is developed from these low-level circuit elements. A board for main power distribution supports the majority of flight electronics, and is extensible to additional hardware in future applications. An ATX computer power supply is developed, allowing the use of a commercial ATX motherboard as the flight computer. The addition of new capabilities is explored in the form of a heater control board. Finally, the power system as a whole is described, and its overall

  9. Cyber-workstation for computational neuroscience.

    PubMed

    Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C

    2010-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper briefly describes the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavioral task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
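
    The recursive least-squares (RLS) regressor mentioned above as an example model block can be sketched in a few lines. The dimensions, forgetting factor, and synthetic data below are assumptions for illustration, not the CW's actual configuration.

    ```python
    # Minimal RLS regressor: a sample-by-sample adaptive linear decoder.
    import numpy as np

    class RLSRegressor:
        def __init__(self, n_inputs, forgetting=0.99, delta=100.0):
            self.w = np.zeros(n_inputs)          # weight vector
            self.P = np.eye(n_inputs) * delta    # inverse correlation matrix
            self.lam = forgetting

        def update(self, x, d):
            """One update: x is the input vector, d is the target value."""
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)         # gain vector
            err = d - self.w @ x                 # a priori prediction error
            self.w += k * err
            self.P = (self.P - np.outer(k, Px)) / self.lam
            return self.w @ x                    # a posteriori prediction

    # Usage: adaptively map neural feature vectors to a kinematic variable.
    rng = np.random.default_rng(1)
    true_w = rng.normal(size=8)
    rls = RLSRegressor(8)
    for _ in range(500):
        x = rng.normal(size=8)
        rls.update(x, true_w @ x + 0.01 * rng.normal())
    ```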

  10. Cyber-Workstation for Computational Neuroscience

    PubMed Central

    DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.

    2009-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper briefly describes the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavioral task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436

  11. Computer aided drug design

    NASA Astrophysics Data System (ADS)

    Jain, A.

    2017-08-01

    Computer-based methods can help in the discovery of lead compounds and can potentially eliminate the chemical synthesis and screening of many irrelevant compounds; in this way, they save time as well as cost. Molecular modeling systems are powerful tools for building, visualizing, analyzing and storing models of complex molecular structures that can help interpret structure-activity relationships. The use of various techniques of molecular mechanics and dynamics, and of software for computer-aided drug design, along with statistical analysis, is a powerful tool for medicinal chemists to synthesize effective therapeutic drugs with minimal side effects.

  12. Complexity Bounds for Quantum Computation

    DTIC Science & Technology

    2007-06-22

    Trustees of Boston University, Boston, MA 02215. ABSTRACT: This project focused on upper and lower bounds for quantum computability using constant... classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second...

  13. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee, may, by letter, require the applicant to submit additional information as is needed for analysis of an application. (18 CFR: Conservation of Power and Water Resources.)

  14. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee, may, by letter, require the applicant to submit additional information as is needed for analysis of an application. (18 CFR: Conservation of Power and Water Resources.)

  15. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee, may, by letter, require the applicant to submit additional information as is needed for analysis of an application. (18 CFR: Conservation of Power and Water Resources.)

  16. Towards zero-power ICT.

    PubMed

    Gammaitoni, Luca; Chiuchiú, D; Madami, M; Carlotti, G

    2015-06-05

    Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.

  17. Towards zero-power ICT

    NASA Astrophysics Data System (ADS)

    Gammaitoni, Luca; Chiuchiú, D.; Madami, M.; Carlotti, G.

    2015-06-01

    Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.

  18. Ethical Responsibility Key to Computer Security.

    ERIC Educational Resources Information Center

    Lynn, M. Stuart

    1989-01-01

    The pervasiveness of powerful computers and computer networks has raised the specter of new forms of abuse and of concomitant ethical issues. Blurred boundaries, hackers, the Computer Worm, ethical issues, and implications for academic institutions are discussed. (MLW)

  19. Control of wind turbine generators connected to power systems

    NASA Technical Reports Server (NTRS)

    Hwang, H. H.; Mozeico, H. V.; Gilbert, L. J.

    1978-01-01

    A unique simulation model based on the Mod-0 wind turbine is developed for simulating both speed and power control. An analytical representation for a wind turbine that employs blade pitch angle feedback control is presented, and a mathematical model is formulated. With the Mod-0 serving as a practical case study, results of a computer simulation of the model as applied to the problems of synchronization and dynamic stability are provided. It is shown that the speed and output of a wind turbine can be satisfactorily controlled within reasonable limits by employing the existing blade pitch control system under specified conditions. For power control, an additional excitation control is required so that the terminal voltage, output power factor, and armature current can be held within narrow limits. As a result, the variation of torque angle is limited even if speed control is not implemented simultaneously with power control. Design features of the ERDA/NASA 100-kW Mod-0 wind turbine are included.

  20. A power-efficient ZF precoding scheme for multi-user indoor visible light communication systems

    NASA Astrophysics Data System (ADS)

    Zhao, Qiong; Fan, Yangyu; Deng, Lijun; Kang, Bochao

    2017-02-01

    In this study, we propose a power-efficient ZF precoding scheme for visible light communication (VLC) downlink multi-user multiple-input-single-output (MU-MISO) systems, which incorporates zero-forcing (ZF) and the characteristics of VLC systems. The main idea of this scheme is that the channel matrix to be pseudoinverted comes from the set of optical Access Points (APs) shared by more than one user, instead of the set of all involved serving APs that existing ZF precoding schemes often use. By doing this, the waste of power caused by transmitting one user's data from APs that do not serve that user can be avoided. In addition, the channel matrix that must be pseudoinverted becomes smaller, which helps to reduce the computational complexity. Simulation results in two scenarios show that the proposed ZF precoding scheme has higher power efficiency, better bit error rate (BER) performance and lower computational complexity compared with traditional ZF precoding schemes.
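
    A minimal numerical sketch of that idea, with an arbitrary 3-user, 6-AP channel and assumed serving-AP sets: the pseudoinverse is taken only over the shared-AP columns, giving a smaller matrix than conventional full-channel ZF. This is an illustration of the stated principle, not the paper's algorithm.

    ```python
    # ZF precoding restricted to APs shared by more than one user (toy example).
    import numpy as np

    rng = np.random.default_rng(2)
    H = np.abs(rng.normal(size=(3, 6)))          # 3 users x 6 APs (nonnegative LOS gains)

    serving = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]  # assumed serving-AP sets per user
    counts = np.bincount([ap for s in serving for ap in s], minlength=6)
    shared = np.flatnonzero(counts > 1)          # APs serving more than one user: {1, 2, 3}

    W_full = np.linalg.pinv(H)                   # conventional ZF over all APs
    W_shared = np.linalg.pinv(H[:, shared])      # proposed: smaller pseudoinverse

    # ZF property on the shared-AP subchannel: the product is (close to) identity,
    # so one user's stream does not leak into another's.
    print(np.round(H[:, shared] @ W_shared, 6))
    ```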

  1. Power Saving Control for Battery-Powered Portable WLAN APs

    NASA Astrophysics Data System (ADS)

    Ogawa, Masakatsu; Hiraguri, Takefumi

    This paper proposes a power saving control function for battery-powered portable wireless LAN (WLAN) access points (APs) to extend the battery life. The IEEE802.11 standard does not support power saving control for APs. To enable a sleep state for an AP, the AP forces the stations (STAs) to refrain from transmitting frames using the network allocation vector (NAV) while the AP is sleeping. Thus the sleep state for the AP can be employed without causing frame loss at the STAs. Numerical analysis and computer simulation reveal that the newly proposed control technique conserves power compared to the conventional control.
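
    The power saving available from such NAV-protected sleep can be estimated with simple duty-cycle arithmetic; the power draws, interval lengths, and battery capacity below are assumptions, not measurements from the paper.

    ```python
    # Back-of-the-envelope battery-life gain from NAV-protected AP sleep.
    BEACON_INTERVAL_MS = 100.0
    NAV_SLEEP_MS = 60.0          # portion of each interval reserved (STAs defer via NAV)
    P_ACTIVE_W, P_SLEEP_W = 1.2, 0.05

    awake = BEACON_INTERVAL_MS - NAV_SLEEP_MS
    p_avg = (P_ACTIVE_W * awake + P_SLEEP_W * NAV_SLEEP_MS) / BEACON_INTERVAL_MS
    battery_wh = 10.0
    print(f"avg power {p_avg:.3f} W -> {battery_wh / p_avg:.1f} h vs "
          f"{battery_wh / P_ACTIVE_W:.1f} h always-on")
    ```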

  2. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-01-01

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
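
    The structural requirement stated above, that the penalty term commute with the encoded Hamiltonian while detectable errors anticommute with it, can be checked numerically on a toy two-qubit example. The operators below are illustrative stand-ins, not the paper's subsystem-code construction.

    ```python
    # Toy check of the commuting-penalty structure for error suppression.
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    H_enc = np.kron(X, X)     # stand-in for the encoded computational Hamiltonian
    H_pen = np.kron(Z, Z)     # stand-in penalty term
    E_p = 50.0                # penalty strength; suppression improves as E_p grows

    # Requirement 1: the penalty commutes with the encoded Hamiltonian, so adding
    # E_p * H_pen leaves the encoded dynamics untouched.
    print("commutes:", np.allclose(H_enc @ H_pen - H_pen @ H_enc, 0))

    # Requirement 2: a detectable error (here a single-qubit bit flip) anticommutes
    # with the penalty, so it maps code states into an energetically raised subspace.
    err = np.kron(X, I2)
    print("error anticommutes:", np.allclose(err @ H_pen + H_pen @ err, 0))

    # Energy gap the penalty opens between the two H_pen eigenspaces: 2 * E_p.
    evals = np.linalg.eigvalsh(H_pen)
    print("penalty gap:", E_p * (evals[-1] - evals[0]))
    ```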

  3. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.

    PubMed

    Marvian, Milad; Lidar, Daniel A

    2017-01-20

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  4. When power does not corrupt: superior individuation processes among powerful perceivers.

    PubMed

    Overbeck, J R; Park, B

    2001-10-01

    To examine whether powerful people fail to individuate the less powerful, the authors assigned participants to either a high-power or low-power role for a computer E-mail role play. In 3 studies, participants in the high-power role made decisions and determined the outcomes of interactions; low-power role players had no power and relied on high-power targets for outcome decisions. Studies 1 and 2 found that high-power perceivers better individuated low-power targets. Study 3 demonstrated that high-power role players' superior judgment can be impaired by including a task that directs their responsibility toward organizational rather than interpersonal concerns. In all, results suggest that the effect of power on social judgment may be more complex and multifaceted than has previously been acknowledged.

  5. Cluster Computing for Embedded/Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  6. An ultralow power athermal silicon modulator.

    PubMed

    Timurdogan, Erman; Sorace-Agaskar, Cheryl M; Sun, Jie; Shah Hosseini, Ehsan; Biberman, Aleksandr; Watts, Michael R

    2014-06-11

    Silicon photonics has emerged as the leading candidate for implementing ultralow power wavelength-division-multiplexed communication networks in high-performance computers, yet current components (lasers, modulators, filters and detectors) consume too much power for the high-speed femtojoule-class links that ultimately will be required. Here we demonstrate and characterize the first modulator to achieve simultaneous high-speed (25 Gb/s), low-voltage (0.5 Vpp) and efficient 0.9 fJ per bit error-free operation. This low-energy, high-speed operation is enabled by a record electro-optic response, obtained in a vertical p-n junction device that at 250 pm/V (30 GHz/V) is up to 10 times larger than prior demonstrations. In addition, this record electro-optic response is used to compensate for thermal drift over a 7.5 °C temperature range with little additional energy consumption (0.24 fJ per bit, for a total energy consumption below 1.03 fJ per bit). The combined results of highly efficient modulation and electro-optic thermal compensation represent a new paradigm in modulator development and a major step towards single-digit femtojoule-class communications.

  7. Synthetic analog computation in living cells.

    PubMed

    Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K

    2013-05-30

    A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.

  8. A computationally efficient technique to model depth, orientation and alignment via ray tracing in acoustic power transfer systems

    NASA Astrophysics Data System (ADS)

    Christensen, David B.; Basaeri, Hamid; Roundy, Shad

    2017-12-01

    In acoustic power transfer systems, a receiver is displaced from a transmitter by an axial depth, a lateral offset (alignment), and a rotation angle (orientation). In systems where the receiver’s position is not fixed, such as a receiver implanted in biological tissue, slight variations in depth, orientation, or alignment can cause significant variations in the received voltage and power. To address this concern, this paper presents a computationally efficient technique to model the effects of depth, orientation, and alignment via ray tracing (DOART) on received voltage and power in acoustic power transfer systems. DOART combines transducer circuit equivalent models, a modified version of Huygens principle, and ray tracing to simulate pressure wave propagation and reflection between a transmitter and a receiver in a homogeneous medium. A reflected grid method is introduced to calculate propagation distances, reflection coefficients, and initial vectors between a point on the transmitter and a point on the receiver for an arbitrary number of reflections. DOART convergence and simulation time per data point is discussed as a function of the number of reflections and elements chosen. Finally, experimental data is compared to DOART simulation data in terms of magnitude and shape of the received voltage signal.
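
    The reflected grid method itself is specific to DOART, but its ingredients (path lengths, per-bounce reflection losses, and launch geometry for an arbitrary number of reflections) can be illustrated with a generic method-of-images sketch for two parallel boundaries. All geometry and the reflection coefficient below are assumptions, not DOART parameters.

    ```python
    # Generic image-source enumeration of multi-reflection paths between two
    # parallel boundaries at z = 0 and z = d (illustrative stand-in for the
    # reflected grid method described above).
    import numpy as np

    d, z_tx, z_rx, r_lat = 10e-3, 2e-3, 7e-3, 1e-3  # depth, source z, receiver z, lateral offset (m)
    R = 0.8                                          # assumed reflection coefficient per bounce

    def image_paths(k_max):
        """Return (n_reflections, path_length, relative_amplitude) tuples."""
        out = []
        for k in range(-k_max, k_max + 1):
            # Receiver images at 2kd + z_rx (2|k| bounces) and 2kd - z_rx (|2k-1| bounces).
            for z_img, n_refl in ((2*k*d + z_rx, 2*abs(k)), (2*k*d - z_rx, abs(2*k - 1))):
                L = np.hypot(r_lat, z_img - z_tx)        # straight ray to the image point
                out.append((n_refl, L, R**n_refl / L))   # spherical spreading 1/L
        return sorted(out)

    for n, L, a in image_paths(2)[:6]:
        print(f"{n} reflections: path {L*1e3:.2f} mm, relative amplitude {a:.0f}")
    ```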

  9. Digital computer technique for setup and checkout of an analog computer

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1968-01-01

    A computer program technique, called Analog Computer Check-Out Routine Digitally (ACCORD), generates complete setup and checkout data for an analog computer. In addition, the correctness of the analog program implementation is validated.

  10. 2011 Computation Directorate Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2012-04-11

    From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI

  11. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    § 154.400 Additional requirements. (18 CFR: Conservation of Power and Water Resources; Federal Energy Regulatory Commission, Department of Energy; Regulations Under Natural Gas Act; Rate Schedules and Tariffs; Limited Rate Changes.)

  12. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 154.400 Additional requirements. (18 CFR: Conservation of Power and Water Resources; Federal Energy Regulatory Commission, Department of Energy; Regulations Under Natural Gas Act; Rate Schedules and Tariffs; Limited Rate Changes.)

  13. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    § 154.400 Additional requirements. (18 CFR: Conservation of Power and Water Resources; Federal Energy Regulatory Commission, Department of Energy; Regulations Under Natural Gas Act; Rate Schedules and Tariffs; Limited Rate Changes.)

  14. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    § 154.400 Additional requirements. (18 CFR: Conservation of Power and Water Resources; Federal Energy Regulatory Commission, Department of Energy; Regulations Under Natural Gas Act; Rate Schedules and Tariffs; Limited Rate Changes.)

  15. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 154.400 Additional requirements. (18 CFR: Conservation of Power and Water Resources; Federal Energy Regulatory Commission, Department of Energy; Regulations Under Natural Gas Act; Rate Schedules and Tariffs; Limited Rate Changes.)

  16. Computation in generalised probabilistic theories

    NASA Astrophysics Data System (ADS)

    Lee, Ciarán M.; Barrett, Jonathan

    2015-08-01

    From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define

  17. Application of supercomputers to computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.

    1984-01-01

    Computers are playing an increasingly important role in the field of aerodynamics such that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. Example results obtained from the successively refined forms of the governing equations are discussed, both in the context of levels of computer power required and the degree to which they either further the frontiers of research or apply to problems of practical importance. Finally, the Numerical Aerodynamic Simulation (NAS) Program - with its 1988 target of achieving a sustained computational rate of 1 billion floating point operations per second and operating with a memory of 240 million words - is discussed in terms of its goals and its projected effect on the future of computational aerodynamics.

  18. Computational Fluid Dynamics Ventilation Study for the Human Powered Centrifuge at the International Space Station

    NASA Technical Reports Server (NTRS)

    Son, Chang H.

    2012-01-01

    The Human Powered Centrifuge (HPC) is a facility planned to be installed on board the International Space Station (ISS) to enable crew exercise under artificial gravity conditions. The HPC equipment includes a "bicycle" for long-term exercise by a crewmember, which provides power to rotate the HPC at a speed of 30 rpm. A crewmember exercising vigorously on the centrifuge generates about twice as much carbon dioxide as a crewmember in ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within the Pressurized Multipurpose Module (PMM) cabin when the HPC is operating. A fully unsteady formulation is used for CFD-based modeling of airflow and CO2 transport, with the so-called sliding mesh concept: the HPC equipment with the adjacent Bay 4 cabin volume is considered in a rotating reference frame, while the rest of the cabin volume is considered in a stationary reference frame. The rotating part of the computational domain also includes a human body model. Localized effects of carbon dioxide dispersion are examined. The strong influence of the rotating HPC equipment on the detected CO2 distribution is discussed.

  19. Computational tool for simulation of power and refrigeration cycles

    NASA Astrophysics Data System (ADS)

    Córdoba Tuta, E.; Reyes Orozco, M.

    2016-07-01

    Small improvements in the thermal efficiency of power cycles bring huge cost savings in the production of electricity; for that reason, a tool for the simulation of power cycles allows modeling the optimal changes for best performance. There is also a big boom in research on the Organic Rankine Cycle (ORC), which aims to generate electricity at low power through cogeneration, and in which the working fluid is usually a refrigerant. A tool to design the elements of an ORC cycle and to select the working fluid would be helpful, because sources of heat from cogeneration are very different and each case would require a custom design. This work presents the development of multiplatform software for the simulation of power and refrigeration cycles, implemented in the C++ language with a graphical interface developed using the multiplatform Qt environment; it runs on the Windows and Linux operating systems. The tool allows the design of custom power cycles, the selection of the type of fluid (thermodynamic properties are calculated through the CoolProp library), calculation of the plant efficiency, and identification of the flow fractions in each branch, and finally it generates a very educational report in PDF format via the LaTeX tool.
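
    As a flavor of the state-point bookkeeping such a tool automates, here is a minimal ideal Rankine-cycle efficiency calculation using the Python bindings of the CoolProp property library named in the abstract. The fluid and pressure levels are arbitrary assumptions, not values from the paper.

    ```python
    # Ideal Rankine cycle efficiency with CoolProp property lookups.
    from CoolProp.CoolProp import PropsSI

    fluid, p_boiler, p_cond = "Water", 8e6, 10e3           # Pa

    h1 = PropsSI("H", "P", p_cond, "Q", 0, fluid)          # saturated liquid at condenser
    v1 = 1 / PropsSI("D", "P", p_cond, "Q", 0, fluid)      # specific volume of liquid
    h2 = h1 + v1 * (p_boiler - p_cond)                     # isentropic pump work (incompressible)
    h3 = PropsSI("H", "P", p_boiler, "Q", 1, fluid)        # saturated vapor leaving boiler
    s3 = PropsSI("S", "P", p_boiler, "Q", 1, fluid)
    h4 = PropsSI("H", "P", p_cond, "S", s3, fluid)         # isentropic turbine expansion

    w_net = (h3 - h4) - (h2 - h1)
    q_in = h3 - h2
    print(f"thermal efficiency: {w_net / q_in:.1%}")
    ```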

  20. Computed tomography-based diagnosis of diffuse compensatory enlargement of coronary arteries using scaling power laws.

    PubMed

    Huo, Yunlong; Choy, Jenny Susana; Wischgoll, Thomas; Luo, Tong; Teague, Shawn D; Bhatt, Deepak L; Kassab, Ghassan S

    2013-04-06

    Glagov's positive remodelling in the early stages of coronary atherosclerosis often results in plaque rupture and acute events. Because positive remodelling is generally diffuse along the epicardial coronary arterial tree, it is difficult to diagnose non-invasively. Hence, the objective of the study is to assess the use of a scaling power law for the diagnosis of positive remodelling of coronary arteries based on computed tomography (CT) images. Epicardial coronary arterial trees were reconstructed from CT scans of six Ossabaw pigs fed a high-fat, high-cholesterol, atherogenic diet for eight months, as well as the same number of body-weight-matched farm pigs fed a lean chow (101.9 ± 16.1 versus 91.5 ± 13.1 kg). The high-fat diet Ossabaw pig model showed diffuse positive remodelling of epicardial coronary arteries. Good fits of the measured coronary data to the length-volume scaling power law (where L_c and V_c are crown length and crown volume) were found for both the high-fat and control groups (R^2 = 0.95 ± 0.04 and 0.99 ± 0.01, respectively). The coefficient K_LV decreased significantly in the high-fat diet group when compared with the control (14.6 ± 2.6 versus 40.9 ± 5.6). The flow-length scaling power law, however, was nearly unaffected by the positive remodelling. The length-volume and flow-length scaling power laws were preserved in epicardial coronary arterial trees after positive remodelling. K_LV < 18 in the length-volume scaling relation is a good index of positive remodelling of coronary arteries. These findings provide a clinical rationale for simple, accurate and non-invasive diagnosis of positive remodelling of coronary arteries using conventional CT scans.

  1. Methods of computing steady-state voltage stability margins of power systems

    DOEpatents

    Chow, Joe Hong; Ghiocel, Scott Gordon

    2018-03-20

    In steady-state voltage stability analysis, as load increases toward a maximum, the conventional Newton-Raphson power flow Jacobian matrix becomes increasingly ill-conditioned, so the power flow fails to converge before reaching maximum loading. A method to directly eliminate this singularity reformulates the power flow problem by introducing an AQ bus with a specified bus angle and the reactive power consumption of a load bus. For steady-state voltage stability analysis, the angle separation between the swing bus and the AQ bus can be varied to control power transfer to the load, rather than specifying the load power itself. For an AQ bus, the power flow formulation consists only of a reactive power equation, thus reducing the size of the Jacobian matrix by one. This reduced Jacobian matrix is nonsingular at the critical voltage point, eliminating a major difficulty in voltage stability analysis for power system operations.
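
    The sketch below illustrates the idea on a two-bus toy system: with the AQ bus angle and reactive power both specified, the only remaining mismatch equation is the reactive power one, so the Newton iteration is scalar. The line admittance and setpoints are placeholders, not values from the patent.

    ```python
    # One-equation power flow for an AQ bus (specified angle and reactive power).
    import numpy as np

    y = 1 / (0.01 + 0.1j)                  # assumed line series admittance (pu)
    G, B = y.real, y.imag                  # Ybus = [[y, -y], [-y, y]]
    V1 = 1.0                               # swing bus: 1.0 pu at angle 0
    theta2 = np.deg2rad(-20.0)             # AQ bus: specified angle (controls transfer)
    Q2_spec = -0.3                         # AQ bus: specified reactive injection (pu)

    def q2(V2):                            # reactive power injection at bus 2
        return V2 * V1 * (-G*np.sin(theta2) + B*np.cos(theta2)) - B * V2**2

    def dq2(V2):                           # scalar "Jacobian" (one equation only)
        return V1 * (-G*np.sin(theta2) + B*np.cos(theta2)) - 2 * B * V2

    V2 = 1.0                               # flat start
    for _ in range(10):
        V2 -= (q2(V2) - Q2_spec) / dq2(V2) # scalar Newton step
    print(f"AQ-bus voltage: {V2:.4f} pu (mismatch {q2(V2) - Q2_spec:.2e})")
    ```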

  2. SELF-POWERED WIRELESS SENSOR NODE POWER MODELING BASED ON IEEE 802.11 COMMUNICATION PROTOCOL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Raymond A. DeCarlo; Lefteri H. Tsoukalas

    There are other considerations that must also be taken into account when computing the power consumption associated with each task. These include: the number of events occurring in a fixed active time period and the duration of each event, the event-information processing time, the total communication time, the number of retransmissions, etc. Additionally, at the network level, the communication of information data packets between WSNs involves collisions, latency, and retransmission, which result in unanticipated power losses. This paper presents stochastic modeling of power demand for a schedule-driven WSN utilizing IEEE 802.11 communication protocols. The model captures the generic operation of a schedule-driven WSN when an external event occurs: sensing, followed by processing, followed by communication. The results are verified via simulation.
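
    A closed-form sketch of the expected per-cycle energy of a schedule-driven node, following the cost terms listed above (events, processing, communication, retransmissions). All currents, timings, and rates are assumed values, not the paper's model parameters.

    ```python
    # Expected energy per duty cycle of a schedule-driven WSN node (assumed values).
    lam = 3.0                              # mean events per active period (Poisson)
    t_active, t_cycle = 0.5, 10.0          # active time and full cycle length (s)
    p_sleep, p_sense, p_proc, p_tx = 0.03e-3, 15e-3, 30e-3, 200e-3   # W
    t_event, t_proc, t_pkt = 0.02, 0.01, 0.004                       # s
    p_loss = 0.1                           # per-packet loss -> geometric retransmissions

    retx = p_loss / (1 - p_loss)           # expected retransmissions per packet
    e_event = p_sense*t_event + p_proc*t_proc + p_tx*t_pkt*(1 + retx)
    e_cycle = p_sleep*(t_cycle - t_active) + p_sense*t_active + lam*e_event
    print(f"expected energy per {t_cycle:.0f} s cycle: {e_cycle*1e3:.3f} mJ")
    ```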

  3. Use of computers in dysmorphology.

    PubMed Central

    Diliberti, J H

    1988-01-01

    As a consequence of the increasing power and decreasing cost of digital computers, dysmorphologists have begun to explore a wide variety of computerised applications in clinical genetics. Of considerable interest are developments in the areas of syndrome databases, expert systems, literature searches, image processing, and pattern recognition. Each of these areas is reviewed from the perspective of the underlying computer principles, existing applications, and the potential for future developments. Particular emphasis is placed on the analysis of the tasks performed by the dysmorphologist and the design of appropriate tools to facilitate these tasks. In this context the computer and associated software are considered paradigmatically as tools for the dysmorphologist and should be designed accordingly. Continuing improvements in the ability of computers to manipulate vast amounts of data rapidly makes the development of increasingly powerful tools for the dysmorphologist highly probable. PMID:3050092

  4. Modeling of Thermoelectric Generator Power Characteristics for Motorcycle-Type Engines

    NASA Astrophysics Data System (ADS)

    Osipkov, Alexey; Poshekhonov, Roman; Arutyunyan, Georgy; Basov, Andrey; Safonov, Roman

    2017-10-01

    Thermoelectric generation in vehicles such as motorcycles, all-terrain vehicles, and snowmobiles opens the possibility of additional electrical energy generation by means of exhaust heat utilization. This is beneficial because replacing the mechanical generator used in such vehicles with a more powerful one in cases of electrical power deficiency is impossible. This paper proposes a calculation model for the thermoelectric generator (TEG) operational characteristics of the low-capacity internal combustion engines used in these vehicles. Two TEG structures are considered: (1) TEG with air cooling and (2) TEG with water cooling. Modeling consists of two calculation stages. In the first stage, the heat exchange coefficients of the hot and cold exchangers are determined using computational fluid dynamics. In the second stage, the TEG operational characteristics are modeled based on the nonlinear equations of the heat transfer and power balance. On the basis of the modeling results, the dependence of the TEG's major operating characteristics (such as the electrical power generated by the TEG and its efficiency and mass) on operating conditions or design parameters is determined. For example, the electrical power generated by a TEG for a Yamaha WR450F motorcycle engine with a volume of 0.449 × 10^-3 m^3 was calculated to be as much as 100 W. Use of the TEG arrangements proposed is justified by the additional electrical power generation for small capacity vehicles, without the need for internal combustion engine redesign.
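
    The second modeling stage couples heat transfer with an electrical power balance; at its core is the standard matched-load thermoelectric relation sketched below. The module count, Seebeck coefficient, internal resistance, and temperature difference are assumed values, not the paper's.

    ```python
    # Matched-load electrical power of a thermoelectric generator (assumed values).
    n_modules = 4
    seebeck = 0.05        # V/K: assumed effective module Seebeck coefficient
    r_internal = 1.5      # ohm per module (assumed)
    dT = 150.0            # K across the module (from the heat-exchange stage)

    v_oc = seebeck * dT                       # open-circuit voltage per module
    p_matched = v_oc**2 / (4 * r_internal)    # max power transfer: R_load = R_internal
    print(f"total electrical power: {n_modules * p_matched:.1f} W")
    ```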

  5. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher-fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases, and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Δx+ = 45, Δy+ = 2 and Δz+ = 17. Various subgrid-scale (SGS) models have been used and, except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent the inlet conditions.

  6. Computer graphics applications to crew displays

    NASA Technical Reports Server (NTRS)

    Wyzkoski, J.

    1983-01-01

    Astronauts are provided with much data and information via the monochrome CRT displays on the orbiter. For this project, two areas were investigated for the possible introduction of computer graphics to enhance and extend the utility of these displays. One involved reviewing the current orbiter displays and identifying those which could be improved via computer graphics. As an example, the tabular data on electrical power distribution and control was enhanced by the addition of color and bar charts. The other dealt with the development of an aid to berthing a payload with the Remote Manipulator System (RMS). This aid consists of a graphics display of the top, front and side views of the payload and cargo bay, and point of resolution (POR) position and attitude data for the current location of the payload. The initial implementation was on an IBM PC clone. The demonstration software installed in the Johnson Space Center Manipulator Development Facility (MDF) was reviewed. Due to current hardware limitations, the MDF version is slow, i.e., about a 40+ second update rate, and hence not real-time. Despite this fact, the evaluation of this additional visual cue as an RMS operator aid indicates that this display, with modifications for speed, etc., can assist the crew. Further development is appropriate.

  7. An Improved Computational Technique for Calculating Electromagnetic Forces and Power Absorptions Generated in Spherical and Deformed Body in Levitation Melting Devices

    NASA Technical Reports Server (NTRS)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation-melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and with the results of previous computational efforts for spherical samples, and the agreement has been very good. The treatment of problems involving deformed surfaces, actually predicting the deformed shape of the specimens, breaks new ground and should be the major use of the proposed method.

  8. Low Power Computing in Distributed Systems

    DTIC Science & Technology

    2006-04-01

    ...performance applications. It has been adopted in embedded systems such as the Stargate from Crossbow [15] and the PASTA... The current consumption of the Stargate board is measured by an Agilent 34401A digital multimeter; the digital multimeter is connected to a PC for data... [Figure residue removed: plot of current draw (A) for floating-point vs. integer operation; measurement setup of power supply, digital multimeter, and Stargate board with XScale processor.] 2.2 Library math function vs...

  9. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.

  10. Computer program for thermodynamic analysis of open cycle multishaft power system with multiple reheat and intercool

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1974-01-01

    A computer program to analyze power systems having any number of shafts up to a maximum of five is presented. On each shaft there can be as many as five compressors and five turbines, along with any specified number of intervening intercoolers and reheaters. A recuperator can be included. Turbine coolant flow can be accounted for. Any fuel consisting entirely of hydrogen and/or carbon can be used. The program is valid for maximum temperatures up to about 2000 K (3600 R). The system description, the analysis method, a detailed explanation of program input and output (including an illustrative example), a dictionary of program variables, and the program listing are presented.

  11. Systematic Computation of Nonlinear Cellular and Molecular Dynamics with Low-Power CytoMimetic Circuits: A Simulation Study

    PubMed Central

    Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.

    2013-01-01

    This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating cellular and molecular dynamics quickly and with good accuracy. The application of the method is illustrated by synthesising for the first time microelectronic CytoMimetic topologies which successfully simulate: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre, while consuming between 1 and 12 microwatts of power. Simulations of fabrication-related variability results are also presented. PMID:23393550
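
    For reference, the class of dynamics such circuits compute can be reproduced numerically with a pair of coupled ODEs containing a Hill nonlinearity. The generic gene-protein model and parameters below are textbook placeholders, not the specific models simulated in the paper.

    ```python
    # Generic Hill-regulated gene-protein ODE pair, integrated numerically.
    from scipy.integrate import solve_ivp

    def gene_protein(t, y, k_m=1.0, k_p=1.0, d_m=0.5, d_p=0.2, K=1.0, n=4):
        m, p = y                                      # mRNA and protein levels
        dm = k_m * K**n / (K**n + p**n) - d_m * m     # Hill-repressed transcription
        dp = k_p * m - d_p * p                        # translation and degradation
        return [dm, dp]

    sol = solve_ivp(gene_protein, (0.0, 50.0), [0.1, 0.0], max_step=0.05)
    print("final protein level ~", round(float(sol.y[1, -1]), 3))
    ```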

  12. Fuel cells provide a revenue-generating solution to power quality problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, J.M. Jr.

    Electric power quality and reliability are becoming increasingly important as computers and microprocessors assume a larger role in commercial, health care and industrial buildings and processes. At the same time, constraints on transmission and distribution of power from central stations are making local areas vulnerable to low voltage, load addition limitations, power quality and power reliability problems. Many customers currently utilize some form of premium power in the form of standby generators and/or UPS systems. These include customers where continuous power is required because of health and safety or security reasons (hospitals, nursing homes, places of public assembly, air traffic control, military installations, telecommunications, etc.). They also include customers with industrial or commercial processes which cannot tolerate an interruption of power because of product loss or equipment damage. The paper discusses the use of the PC25 fuel cell power plant for backup and parallel power supplies for critical industrial applications. Several PC25 installations are described: the use of propane in a PC25; use by rural cooperatives; and a demonstration of PC25 technology using landfill gas.

  13. System-wide power management control via clock distribution network

    DOEpatents

    Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

    2015-05-19

    An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.

  14. Student Computer Dialogs Without Special Purpose Languages.

    ERIC Educational Resources Information Center

    Bork, Alfred

    The phrase "student computer dialogs" refers to interactive sessions between the student and the computer. Rather than using programing languages specifically designed for computer assisted instruction (CAI), existing general purpose languages should be emphasized in the future development of student computer dialogs, as the power and…

  15. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  16. Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms

    DOE PAGES

    Roald, Line Alnaes; Andersson, Goran

    2017-08-29

    Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
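
    The analytical reformulation can be illustrated with the standard Gaussian tightening of a single line-flow constraint: the chance constraint becomes a deterministic limit reduced by an uncertainty margin. The limit, violation probability, flow sensitivity, and forecast-error spread below are placeholder numbers, not results from the paper.

    ```python
    # Gaussian chance-constraint tightening for one line-flow limit (toy numbers).
    from scipy.stats import norm

    flow_limit = 100.0      # MW thermal limit
    epsilon = 0.05          # allowed violation probability
    sigma_wind = 20.0       # std dev of wind-infeed forecast error (MW)
    sensitivity = 0.6       # linearized d(flow)/d(wind) at the operating point

    margin = norm.ppf(1 - epsilon) * abs(sensitivity) * sigma_wind
    print(f"enforce flow <= {flow_limit - margin:.1f} MW "
          f"instead of {flow_limit:.1f} MW (margin {margin:.1f} MW)")
    ```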

  17. Computer Recreations.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1989-01-01

    Reviews the performance of computer programs for writing poetry and prose, including MARK V. SHANEY, MELL, POETRY GENERATOR, THUNDER THOUGHT, and ORPHEUS. Discusses the writing principles of the programs. Provides additional information on computer magnification techniques. (YP)

  18. Power system

    DOEpatents

    Hickam, Christopher Dale [Glasford, IL

    2008-03-18

    A power system includes a prime mover, a transmission, and a fluid coupler having a selectively engageable lockup clutch. The fluid coupler may be drivingly connected between the prime mover and the transmission. Additionally, the power system may include a motor/generator drivingly connected to at least one of the prime mover and the transmission. The power-system may also include power-system controls configured to execute a control method. The control method may include selecting one of a plurality of modes of operation of the power system. Additionally, the control method may include controlling the operating state of the lockup clutch dependent upon the mode of operation selected. The control method may also include controlling the operating state of the motor/generator dependent upon the mode of operation selected.

  19. Additional Security Considerations for Grid Management

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.

    2003-01-01

    The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as how they can be managed.

  20. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number

  1. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
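
    As an editorial illustration of the FLOPS metric discussed in this record, the following is a minimal Python sketch that estimates sustained floating-point throughput from a dense matrix multiply; the matrix size, trial count, and the 2n^3 operation count are illustrative assumptions, not a benchmark from the dissertation.

      import time
      import numpy as np

      def matmul_gflops(n=2048, trials=3):
          """Estimate sustained GFLOPS from an n x n matrix multiply.

          A dense matmul performs roughly 2*n**3 floating-point
          operations (n**3 multiplies plus n**3 additions).
          """
          a = np.random.rand(n, n)
          b = np.random.rand(n, n)
          best = float("inf")
          for _ in range(trials):
              t0 = time.perf_counter()
              _ = a @ b
              best = min(best, time.perf_counter() - t0)
          return 2.0 * n**3 / best / 1e9

      print(f"~{matmul_gflops():.1f} GFLOPS sustained")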

  2. Computations of unsteady multistage compressor flows in a workstation environment

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen L.

    1992-01-01

    High-end graphics workstations are becoming a necessary tool in the computational fluid dynamics environment. In addition to their graphic capabilities, workstations of the latest generation have powerful floating-point-operation capabilities. As workstations become common, they could provide valuable computing time for such applications as turbomachinery flow calculations. This report discusses the issues involved in implementing an unsteady, viscous multistage-turbomachinery code (STAGE-2) on workstations. It then describes work in which the workstation version of STAGE-2 was used to study the effects of axial-gap spacing on the time-averaged and unsteady flow within a 2 1/2-stage compressor. The results included time-averaged surface pressures, time-averaged pressure contours, standard deviation of pressure contours, pressure amplitudes, and force polar plots.

  3. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds for utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be the better solution for energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  4. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue-realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients, making them efficient to encode numerically. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model are demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
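
    The record's central numerical device, computing spatial gradients by Fourier collocation, can be illustrated in one dimension with a short Python sketch; the grid size and test field below are invented for illustration, and this is not the authors' implementation.

      import numpy as np

      # 1-D periodic grid (grid size and test field are arbitrary choices).
      N, L = 256, 2 * np.pi
      x = np.linspace(0, L, N, endpoint=False)
      p = np.sin(3 * x)  # smooth test "pressure" field

      # Fourier-collocation derivative: transform, multiply by i*k, invert.
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers
      dp_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(p)))

      # Agrees with the exact derivative 3*cos(3x) to near machine precision.
      print(np.max(np.abs(dp_dx - 3 * np.cos(3 * x))))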

  5. Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Sdhuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.

    2011-01-01

    A NASA multi-Center study team was assembled from LaRC, MSFC, KSC, JSC and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault-tolerance, mass, power, and redundancy management impacts. Furthermore, a goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.

  6. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new dual look-up table hardware architecture that performs the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
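
    To make the feature-extraction idea concrete, here is a minimal Python sketch that keeps only the low-order DCT coefficients of a signal window as a reduced-resolution feature vector; the window length, coefficient count, and use of scipy (rather than the paper's multiplier-free look-up-table hardware) are illustrative assumptions.

      import numpy as np
      from scipy.fft import dct

      def dct_features(window, n_coeffs=4):
          # Low-order DCT coefficients capture the slow envelope of the
          # window; discarding the rest reduces resolution and hardware cost.
          return dct(window, type=2, norm="ortho")[:n_coeffs]

      ecog_window = np.random.randn(128)  # stand-in for one ECoG channel window
      print(dct_features(ecog_window))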

  7. Comparison of Computational and Experimental Results for a Transonic Variable-Speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David; Flegel, Ashlie

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive-incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative-incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  8. Cyanoethylated compounds as additives in lithium/lithium batteries

    DOEpatents

    Nagasubramanian, Ganesan

    1999-01-01

    The power loss of lithium/lithium ion battery cells is significantly reduced, especially at low temperatures, when about 1% by weight of an additive is incorporated in the electrolyte layer of the cells. The usable additives are organic solvent soluble cyanoethylated polysaccharides and poly(vinyl alcohol). The power loss decrease results primarily from the decrease in the charge transfer resistance at the interface between the electrolyte and the cathode.

  9. Bounds on the power of proofs and advice in general physical theories.

    PubMed

    Lee, Ciarán M; Hoban, Matty J

    2016-06-01

    Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms that combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text], which does not make use of any uniquely quantum structure, such as the fact that observables correspond to self-adjoint operators, and thus may be of independent interest.

  10. Demonstration of blind quantum computing.

    PubMed

    Barz, Stefanie; Kashefi, Elham; Broadbent, Anne; Fitzsimons, Joseph F; Zeilinger, Anton; Walther, Philip

    2012-01-20

    Quantum computers, besides offering substantial computational speedups, are also expected to preserve the privacy of a computation. We present an experimental demonstration of blind quantum computing in which the input, computation, and output all remain unknown to the computer. We exploit the conceptual framework of measurement-based quantum computation that enables a client to delegate a computation to a quantum server. Various blind delegated computations, including one- and two-qubit gates and the Deutsch and Grover quantum algorithms, are demonstrated. The client only needs to be able to prepare and transmit individual photonic qubits. Our demonstration is crucial for unconditionally secure quantum cloud computing and might become a key ingredient for real-life applications, especially when considering the challenges of making powerful quantum computers widely available.

  11. Numerical computation of spherical harmonics of arbitrary degree and order by extending exponent of floating point numbers

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2012-04-01

    By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without underflow problems. The new method enables us to evaluate ALFs of extremely high degree, such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedure of floating point numbers with the exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation realizes meaningful execution of spherical harmonic synthesis and/or analysis of arbitrary degree and order.
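
    The exponent-extension idea lends itself to a short illustration: represent a value as a significand paired with an integer counting factors of a large power-of-two radix, and renormalize after each operation. The Python sketch below is editorial; the radix 2^960, the normalization thresholds, and the helper names are illustrative assumptions, not the paper's code.

      # Extended number: (x, i) represents x * RADIX**i, with x a double.
      RADIX = 2.0 ** 960  # illustrative large power-of-two radix
      BIG, SMALL = 2.0 ** 480, 2.0 ** -480

      def normalize(x, i):
          # Keep the significand's magnitude within [SMALL, BIG).
          if x == 0.0:
              return 0.0, 0
          while abs(x) >= BIG:
              x, i = x / RADIX, i + 1
          while abs(x) < SMALL:
              x, i = x * RADIX, i - 1
          return x, i

      def xmul(a, b):
          # With normalized inputs, the double-precision product stays in range.
          return normalize(a[0] * b[0], a[1] + b[1])

      tiny = normalize(1e-200, 0)
      print(xmul(tiny, tiny))  # represents 1e-400: no underflow to zero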

  12. Power quality load management for large spacecraft electrical power systems

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.

    1988-01-01

    In December 1986, a Center Director's Discretionary Fund (CDDF) proposal was granted to study power system control techniques in large space electrical power systems. Presented are the accomplishments in the area of power system control by power quality load management. In addition, information concerning the distortion problems in a 20 kHz ac power system is presented.

  13. Power limits for microbial life.

    PubMed

    LaRowe, Douglas E; Amend, Jan P

    2015-01-01

    To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here, we apply a bioenergetic model to a well-characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm^-3) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ∼10^-12 and 10^-16 W cm^-3. The rates of POC degradation are calculated using a continuum model, while Gibbs energies have been computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells per cm^3 can be well captured using a maintenance power of 190 zW cell^-1, two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell^-1, with most values under ∼300 zW cell^-1. Furthermore, our analysis places the absolute minimum power requirement for a single cell to remain viable on the order of 1 zW cell^-1.
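
    A quick back-of-envelope check of the figures quoted above (the numbers are taken from this record; the division is the only thing shown):

      # Biomass supportable at a given volumetric power supply, assuming
      # every cell draws only the quoted maintenance power.
      power_supply = 1e-12   # W cm^-3, upper end of the range reported above
      maintenance = 190e-21  # W per cell (190 zW)

      print(f"{power_supply / maintenance:.1e} cells cm^-3")  # ~5.3e+06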

  14. Application of Hybrid Optimization-Expert System for Optimal Power Management on Board Space Power Station

    NASA Technical Reports Server (NTRS)

    Momoh, James; Chattopadhyay, Deb; Basheer, Omar Ali AL

    1996-01-01

    The space power system has two sources of energy: photovoltaic blankets and batteries. The optimal power management problem on board comprises two broad operations. The first, off-line power scheduling, determines the load allocation schedule for the next several hours based on forecasts of load and solar power availability; the nature of this study puts less emphasis on computation speed and more importance on the optimality of the solution. The second, on-line power rescheduling, is needed in the event of a contingency to optimally reschedule the loads so as to minimize the 'unused' or 'wasted' energy while keeping the priority on certain types of load and minimizing disturbance of the original optimal schedule determined in the first-stage off-line study. The computational performance of the on-line 'rescheduler' is an important criterion and plays a critical role in the selection of the appropriate tool. The Howard University Center for Energy Systems and Control has developed a hybrid optimization-expert system based power management program. The pre-scheduler has been developed using a non-linear multi-objective optimization technique called the Outer Approximation method and implemented using the General Algebraic Modeling System (GAMS). The optimization model has the capability of dealing with multiple conflicting objectives, viz. maximizing energy utilization, minimizing the variation of load over a day, etc., and incorporates several complex interactions between the loads in a space system. The rescheduling is performed using an expert system developed in PROLOG which utilizes a rule base for reallocation of the loads in an emergency condition, viz. shortage of power due to solar array failure, increase of base load, addition of a new activity, repetition of an old activity, etc. Both modules handle decision making on battery charging and discharging and allocation of loads over a time horizon of a day divided into intervals of 10

  15. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require achieving a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing

  16. Interdisciplinary Team-Teaching Experience for a Computer and Nuclear Energy Course for Electrical and Computer Engineering Students

    ERIC Educational Resources Information Center

    Kim, Charles; Jackson, Deborah; Keiller, Peter

    2016-01-01

    A new, interdisciplinary, team-taught course has been designed to educate students in Electrical and Computer Engineering (ECE) so that they can respond to global and urgent issues concerning computer control systems in nuclear power plants. This paper discusses our experience and assessment of the interdisciplinary computer and nuclear energy…

  17. Addition of acetate improves stability of power generation using microbial fuel cells treating domestic wastewater.

    PubMed

    Stager, Jennifer L; Zhang, Xiaoyuan; Logan, Bruce E

    2017-12-01

    Power generation using microbial fuel cells (MFCs) must provide stable, continuous conversion of organic matter in wastewaters into electricity. However, when relatively small diameter (0.8 cm) graphite fiber brush anodes were placed close to the cathodes in MFCs, power generation was unstable during treatment of low strength domestic wastewater. One reactor produced 149 mW/m^2 before power generation failed, while the other reactor produced 257 mW/m^2, with both reactors exhibiting severe power overshoot in polarization tests. Using separators or activated carbon cathodes did not result in stable operation as the reactors continued to exhibit power overshoot based on polarization tests. However, adding acetate (1 g/L) to the wastewater produced stable performance during fed-batch and continuous flow operation, and there was no power overshoot in polarization tests. These results highlight the importance of wastewater strength and brush anode size for producing stable and continuous power in compact MFCs. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Power Management and SRAM for Energy-Autonomous and Low-Power Systems

    NASA Astrophysics Data System (ADS)

    Chen, Gregory K.

    We demonstrate the two first-known, complete, self-powered millimeter-scale computer systems. These microsystems achieve zero-net-energy operation using solar energy harvesting and ultra-low-power circuits. A medical implant for monitoring intraocular pressure (IOP) is presented as part of a treatment for glaucoma. The 1.5 mm³ IOP monitor is easily implantable because of its small size and measures IOP with 0.5 mmHg accuracy. It wirelessly transmits data to an external wand while consuming 4.70 nJ/bit. This provides rapid feedback about treatment efficacies to decrease physician response time and potentially prevent unnecessary vision loss. A nearly-perpetual temperature sensor is presented that processes data using a 2.1 µW near-threshold ARM® Cortex-M3™ µP that provides a widely-used and trusted programming platform. Energy harvesting and power management techniques for these two microsystems enable energy-autonomous operation. The IOP monitor harvests 80 nW of solar power while consuming only 5.3 nW, extending lifetime indefinitely. This allows the device to provide medical information for extended periods of time, giving doctors time to converge upon the best glaucoma treatment. The temperature sensor uses on-demand power delivery to improve low-load dc-dc voltage conversion efficiency by 4.75×. It also performs linear regulation to deliver power with low noise, improved load regulation, and tight line regulation. Low-power high-throughput SRAM techniques help millimeter-scale microsystems meet stringent power budgets. VDD scaling in memory decreases energy per access, but also decreases stability margins. These margins can be improved using sizing, VTH selection, and assist circuits, as well as new bitcell designs. Adaptive Crosshairs modulation of SRAM power supplies fixes 70% of parametric failures. Half-differential SRAM design improves stability, reducing VMIN by 72 mV. The circuit techniques for energy autonomy presented in this dissertation enable
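
    As a back-of-envelope illustration of the IOP monitor's energy budget (figures from this record; duty-cycling and storage losses are ignored), the net harvested power bounds the sustainable telemetry rate:

      harvest, idle, per_bit = 80e-9, 5.3e-9, 4.70e-9  # W, W, J/bit

      # Power left over after idle consumption, spent at 4.70 nJ per bit.
      sustainable_rate = (harvest - idle) / per_bit
      print(f"~{sustainable_rate:.0f} bits/s, energy neutral")  # ~16 bits/s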

  19. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  1. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  2. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal into a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and a number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed-integer linear programming (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound.
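
    A minimal Python sketch of the LP idea, under simplifying assumptions: the per-phase times and power draws below are invented, and the power constraint is expressed as a total energy budget rather than the hardware-enforced instantaneous bound used in the dissertation.

      import numpy as np
      from scipy.optimize import linprog

      # t[p, c]: seconds for phase p in configuration c (a DVFS state plus
      # thread count); w[c]: watts drawn in configuration c. All invented.
      t = np.array([[4.0, 3.0, 2.2],
                    [6.0, 4.5, 3.5]])
      w = np.array([40.0, 60.0, 95.0])

      # x[p, c]: fraction of phase p run in configuration c (flattened).
      c_obj = t.ravel()                      # minimize total execution time
      A_ub = (t * w).ravel()[None, :]        # energy used by each fraction
      b_ub = [450.0]                         # joule budget standing in for the power cap
      A_eq = np.kron(np.eye(2), np.ones(3))  # each phase fully scheduled
      b_eq = [1.0, 1.0]

      res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=(0, 1))
      print(res.x.reshape(2, 3), res.fun)    # schedule fractions, total time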

  3. An ultralow power athermal silicon modulator

    PubMed Central

    Timurdogan, Erman; Sorace-Agaskar, Cheryl M.; Sun, Jie; Shah Hosseini, Ehsan; Biberman, Aleksandr; Watts, Michael R.

    2014-01-01

    Silicon photonics has emerged as the leading candidate for implementing ultralow power wavelength-division-multiplexed communication networks in high-performance computers, yet current components (lasers, modulators, filters and detectors) consume too much power for the high-speed femtojoule-class links that ultimately will be required. Here we demonstrate and characterize the first modulator to achieve simultaneous high-speed (25 Gb s^-1), low-voltage (0.5 VPP) and efficient 0.9 fJ per bit error-free operation. This low-energy high-speed operation is enabled by a record electro-optic response, obtained in a vertical p-n junction device that at 250 pm V^-1 (30 GHz V^-1) is up to 10 times larger than prior demonstrations. In addition, this record electro-optic response is used to compensate for thermal drift over a 7.5 °C temperature range with little additional energy consumption (0.24 fJ per bit for a total energy consumption below 1.03 fJ per bit). The combined results of highly efficient modulation and electro-optic thermal compensation represent a new paradigm in modulator development and a major step towards single-digit femtojoule-class communications. PMID:24915772

  4. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  5. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, after which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do so, the QR decomposition algorithm is used, which can be executed at low cost; the computational complexity is therefore reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
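
    The standard MV weight computation that underlies this record can be sketched in a few lines of numpy; the dimensions, the diagonal loading factor, and the direct solve (standing in for the paper's QR-based route) are illustrative assumptions.

      import numpy as np

      def mv_weights(R, a, eps=1e-3):
          # Minimum-variance apodization: w = R^-1 a / (a^H R^-1 a),
          # with diagonal loading to keep the solve well conditioned.
          L = len(a)
          Rl = R + eps * np.trace(R) / L * np.eye(L)
          Ria = np.linalg.solve(Rl, a)
          return Ria / (a.conj() @ Ria)

      L = 16
      snapshots = np.random.randn(L, 100)  # delayed subarray channel data
      R = snapshots @ snapshots.T / 100    # spatial covariance estimate
      w = mv_weights(R, np.ones(L))        # steering vector of ones after delays
      print(w.sum())                       # ~1: the distortionless constraint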

  6. Computational power and generative capacity of genetic systems.

    PubMed

    Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E

    2016-01-01

    Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of the sign and the linear nature of the signifier. Besides these semiotic features that are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of the cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as a possibility to incorporate meta-statements about the system, and thus establishes the internal capacity for its evolution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R&D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National Laboratory programs.

  8. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  9. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  10. User interface design principles for the SSM/PMAD automated power system

    NASA Technical Reports Server (NTRS)

    Jakstas, Laura M.; Myers, Chris J.

    1991-01-01

    Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.

  11. Managing Power Heterogeneity

    NASA Astrophysics Data System (ADS)

    Pruhs, Kirk

    A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run-time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core were required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.

  12. Reducing power usage on demand

    NASA Astrophysics Data System (ADS)

    Corbett, G.; Dewhurst, A.

    2016-10-01

    The Science and Technology Facilities Council (STFC) datacentre provides large-scale High Performance Computing facilities for the scientific community. It currently consumes approximately 1.5 MW, and this has risen by 25% in the past two years. STFC has been investigating leveraging preemption in the Tier 1 batch farm to save power. HEP experiments are increasingly using jobs that can be killed to take advantage of opportunistic CPU resources or novel cost models such as Amazon's spot pricing. Additionally, schemes from energy providers are available that offer financial incentives to reduce power consumption at peak times. Under normal operating conditions, 3% of the batch farm capacity is wasted due to draining machines. By using preemptable jobs, nodes can be rapidly made available to run multicore jobs without this wasted resource. The use of preemptable jobs has been extended so that at peak times machines can be hibernated quickly to save energy. This paper describes the implementation of the above and demonstrates that STFC could in future take advantage of such energy saving schemes.

  13. Development of a small-scale computer cluster

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created the need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers with the proper software can multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components that multiplies the performance of a single desktop machine, while minimizing occupied space and still remaining cost effective.

  14. Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun

    2017-01-01

    The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…
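
    For context, the computation this record alludes to can be sketched with scipy: under H_0 the statistic is referred to a central chi-square, and power is read off a noncentral chi-square whose noncentrality is n times the population misfit. The degrees of freedom, misfit, and sample size below are invented.

      from scipy.stats import chi2, ncx2

      def sem_power(n, df, f_ml, alpha=0.05):
          # Power of the likelihood ratio test T_ml = n * F_ml: central
          # chi-square critical value under H_0, noncentral chi-square
          # (noncentrality n * f_ml) under the alternative.
          crit = chi2.ppf(1 - alpha, df)
          return 1 - ncx2.cdf(crit, df, n * f_ml)

      print(sem_power(n=200, df=24, f_ml=0.05))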

  15. QMC Goes BOINC: Using Public Resource Computing to Perform Quantum Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Rainey, Cameron; Engelhardt, Larry; Schröder, Christian; Hilbig, Thomas

    2008-10-01

    Theoretical modeling of magnetic molecules traditionally involves the diagonalization of quantum Hamiltonian matrices. However, as the complexity of these molecules increases, the matrices become so large that this process becomes unusable. An additional challenge to this modeling is that many repetitive calculations must be performed, further increasing the need for computing power. Both of these obstacles can be overcome by using a quantum Monte Carlo (QMC) method and a distributed computing project. We have recently implemented a QMC method within the Spinhenge@home project, which is a Public Resource Computing (PRC) project where private citizens allow part-time usage of their PCs for scientific computing. The use of PRC for scientific computing will be described in detail, as well as how you can contribute to the project. See, e.g., L. Engelhardt, et al., Angew. Chem. Int. Ed. 47, 924 (2008); C. Schröder, in Distributed & Grid Computing - Science Made Transparent for Everyone. Principles, Applications and Supporting Communities (Weber, M.H.W., ed., 2008). Project URL: http://spin.fh-bielefeld.de

  16. FY17 CSSE L2 Milestone Report: Analyzing Power Usage Characteristics of Workloads Running on Trinity.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin

    This report summarizes the work performed as part of a FY17 CSSE L2 milestone to investigate the power usage behavior of ASC workloads running on the ATS-1 Trinity platform. Techniques were developed to instrument application code regions of interest using the Power API together with the Kokkos profiling interface and Caliper annotation library. Experiments were performed to understand the power usage behavior of mini-applications and the SNL/ATDM SPARC application running on ATS-1 Trinity Haswell and Knights Landing compute nodes. A taxonomy of power measurement approaches was identified and presented, providing a guide for application developers to follow. Controlled scaling study experiments were performed on up to 2048 nodes of Trinity along with smaller-scale experiments on Trinity testbed systems. Additionally, power and energy system monitoring information from Trinity was collected and archived for post analysis of "in-the-wild" workloads. Results were analyzed to assess the sensitivity of the workloads to ATS-1 compute node type (Haswell vs. Knights Landing), CPU frequency control, node-level power capping control, OpenMP configuration, Knights Landing on-package memory configuration, and algorithm/solver configuration. Overall, this milestone lays groundwork for addressing the long-term goal of determining how to best use and operate future ASC platforms to achieve the greatest benefit subject to a constrained power budget.

  17. Computational analysis of nonlinearities within dynamics of cable-based driving systems

    NASA Astrophysics Data System (ADS)

    Anghelache, G. D.; Nastac, S.

    2017-08-01

    This paper deals with the computational nonlinear dynamics of mechanical systems containing flexural parts within the actuating scheme, and especially treats cable-based driving systems. Both functional nonlinearities and the real characteristics of the power supply were assumed, in order to obtain a realistic computer simulation model able to provide feasible results regarding the system dynamics. The transitory and stable regimes during a regular exploitation cycle were taken into account. The authors present a particular case of a lift system, considered representative for the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or regular practice in the field. The results analysis and the final discussion reveal the correlated dynamic aspects within the mechanical parts, the driving system, and the power supply, all of which supply potential sources of particular resonances within some transitory phases of the working cycle, which can affect structural and functional dynamics. In addition, the influences of the computational hypotheses on both the quantitative and qualitative behaviour of the system are underlined. Obviously, the most significant consequence of this theoretical and computational research consists of developing a unitary and feasible model, useful for characterizing the nonlinear dynamic effects in systems with cable-based driving schemes, and thereby helping to optimize the exploitation regime, including dynamics control measures.

  18. Coherent addition of high power broad-area laser diodes with a compact VBG V-shaped external Talbot cavity

    DOE PAGES

    Liu, Bo; Braiman, Yehuda

    2018-02-06

    In this paper, we introduce a compact V-shaped external Talbot cavity for phase locking of high power broad-area laser diodes. The length of the compact cavity is ~25 mm. Near-diffraction-limit coherent addition of 10 broad-area laser diodes indicated that high quality phase locking was achieved. We measured the near-field emission mode of each individual broad-area laser diode with different feedback, such as a volume Bragg grating and a high reflection mirror. Finally, we found that the best phase locking of the broad-area laser diodes was achieved by the compact V-shaped external Talbot cavity with volume Bragg grating feedback.

  1. Evaluation of direct and indirect additive manufacture of maxillofacial prostheses.

    PubMed

    Eggbeer, Dominic; Bibb, Richard; Evans, Peter; Ji, Lu

    2012-09-01

    The efficacy of computer-aided technologies in the design and manufacture of maxillofacial prostheses has not been fully proven. This paper presents research into the evaluation of direct and indirect additive manufacture of a maxillofacial prosthesis against conventional laboratory-based techniques. An implant/magnet-retained nasal prosthesis case from a UK maxillofacial unit was selected as a case study. A benchmark prosthesis was fabricated using conventional laboratory-based techniques for comparison against additive manufactured prostheses. For the computer-aided workflow, photogrammetry, computer-aided design and additive manufacture (AM) methods were evaluated in direct prosthesis body fabrication and indirect production using an additively manufactured mould. Qualitative analysis of position, shape, colour and edge quality was undertaken. Mechanical testing to ISO standards was also used to compare the silicone rubber used in the conventional prosthesis with the AM material. Critical evaluation has shown that utilising a computer-aided work-flow can produce a prosthesis body that is comparable to that produced using existing best practice. Technical limitations currently prevent the direct fabrication method demonstrated in this paper from being clinically viable. This research helps prosthesis providers understand the application of a computer-aided approach and guides technology developers and researchers to address the limitations identified.

  2. Applications of NMR and computational methodologies to study protein dynamics.

    PubMed

    Narayanan, Chitra; Bafna, Khushboo; Roux, Louise D; Agarwal, Pratul K; Doucet, Nicolas

    2017-08-15

    Overwhelming evidence now illustrates the defining role of atomic-scale protein flexibility in biological events such as allostery, cell signaling, and enzyme catalysis. Over the years, spin relaxation nuclear magnetic resonance (NMR) has provided significant insights on the structural motions occurring on multiple time frames over the course of a protein life span. The present review article aims to illustrate to the broader community how this technique continues to shape many areas of protein science and engineering, in addition to being an indispensable tool for studying atomic-scale motions and functional characterization. Continuing developments in underlying NMR technology alongside software and hardware developments for complementary computational approaches now enable methodologies to routinely provide spatial directionality and structural representations traditionally harder to achieve solely using NMR spectroscopy. In addition to its well-established role in structural elucidation, we present recent examples that illustrate the combined power of selective isotope labeling, relaxation dispersion experiments, chemical shift analyses, and computational approaches for the characterization of conformational sub-states in proteins and enzymes. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Automated Power Systems Management (APSM)

    NASA Technical Reports Server (NTRS)

    Bridgeforth, A. O.

    1981-01-01

    A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.

  4. Power distribution for electron beam welding

    NASA Technical Reports Server (NTRS)

    Edwards, E.

    1980-01-01

    The power distribution of an electron beam is analyzed. Digital computer techniques are used to evaluate the radial distribution of power detected by a wire probe circulating through the beam. Results are reported.

  5. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms

    PubMed Central

    He, Li; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions. PMID:29123546

  6. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.

    PubMed

    Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions.
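
    The accuracy/parallelism trade-off described above can be illustrated with a toy sketch in which the batch size plays the role of evolving granularity: larger batches expose more independent work per step, but centroids are refreshed less often, increasing the chance of mis-affiliation. This is an illustrative sketch under those assumptions, not the authors' algorithm; all names are hypothetical.

      import numpy as np

      def incremental_kmeans(stream, centroids, batch_size):
          """Toy incremental clustering; batch_size acts as the evolving granularity."""
          centroids = centroids.astype(float).copy()
          counts = np.ones(len(centroids))  # assume each centroid was seeded by one point
          for start in range(0, len(stream), batch_size):
              batch = stream[start:start + batch_size]
              # Assignments within a batch are independent of one another,
              # so a larger batch offers more parallelism (smaller work-depth).
              d = np.linalg.norm(batch[:, None, :] - centroids[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              # Centroids are refreshed only once per batch: coarser granularity
              # means staler centroids during assignment, i.e. lower accuracy.
              for k in np.unique(labels):
                  pts = batch[labels == k]
                  counts[k] += len(pts)
                  centroids[k] += (pts - centroids[k]).sum(axis=0) / counts[k]
          return centroids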

  7. Mobile computing device configured to compute irradiance, glint, and glare of the sun

    DOEpatents

    Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

    2014-03-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
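
    One plausible reading of the pixel-based computation is a simple ratio: the Sun image provides a per-count calibration reference, and each pixel of the entity image is scaled by it to yield an irradiance estimate. The sketch below rests on that assumption; the function name and the assumed direct normal irradiance are hypothetical, not taken from the patent.

      import numpy as np

      DNI = 1000.0  # assumed direct normal irradiance, W/m^2 (hypothetical)

      def irradiance_map(sun_pixels, receiver_pixels):
          """Estimate an irradiance map by normalizing against the Sun image."""
          counts_per_sun = sun_pixels.astype(float).mean()  # calibration reference
          return receiver_pixels.astype(float) / counts_per_sun * DNI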

  8. Computer Network Operations Methodology

    DTIC Science & Technology

    2004-03-01

    …means of their computer information systems. Disrupt: this type of attack focuses on disrupting, as "attackers might surreptitiously reprogram enemy…" …by reprogramming the computers that control distribution within the power grid. A disruption attack introduces disorder and inhibits the effective…between commanders. The use of methodologies is widespread and done subconsciously to assist individuals in decision making. The processes that…

  9. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  10. An assessment of future computer system needs for large-scale computation

    NASA Technical Reports Server (NTRS)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  11. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for teaching modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general-education and worldview functions of computer science define the necessity of additional research of the…

  12. Additive manufacturing: Toward holistic design

    DOE PAGES

    Jared, Bradley H.; Aguilo, Miguel A.; Beghini, Lauren L.; ...

    2017-03-18

    Here, additive manufacturing offers unprecedented opportunities to design complex structures optimized for performance envelopes inaccessible under conventional manufacturing constraints. Additive processes also promote the realization of engineered materials with microstructures and properties that are impossible via traditional synthesis techniques. Spurred by these capabilities, design optimization tools have experienced a recent revival. The current capabilities of additive processes and optimization tools are summarized briefly, and an emerging opportunity is discussed: a holistic design paradigm whereby computational tools are integrated with stochastic process and material awareness to enable the concurrent optimization of design topologies, material constructs and fabrication processes.

  13. Evaluation of the operating resource of the most loaded rotor element of the additional steam turbine with steam-hydrogen overheat of the working fluid at a nuclear power station

    NASA Astrophysics Data System (ADS)

    Bairamov, A. N.

    2017-11-01

    Operating a nuclear power plant together with a hydrogen energy complex and a continuously running low-capacity additional steam turbine improves the reliability of the power supply to the plant's own needs during major system accidents. In this scheme the additional steam turbine is always in operation, so its operating conditions alternate, and with them the loads on the rotor, which affects its service life. The aim of the article is to investigate the effect of these cyclic loads on the number of cycles to failure of the most critical rotor elements of the additional steam turbine, given the alternation of operating conditions between peak-load operation and night-time unloading. The article demonstrates that the stress-range intensity values for the most critical rotor elements lie near the threshold region of the fatigue-failure diagram. In this region, an increase in loading frequency is associated with fatigue-crack closure and, consequently, a possible slowing of crack growth. An approximate number of cycles to failure for the most heavily loaded rotor element is obtained.

  14. Large fluctuations of the macroscopic current in diffusive systems: a numerical test of the additivity principle.

    PubMed

    Hurtado, Pablo I; Garrido, Pedro L

    2010-04-01

    Most systems, when pushed out of equilibrium, respond by building up currents of locally conserved observables. Understanding how microscopic dynamics determines the averages and fluctuations of these currents is one of the main open problems in nonequilibrium statistical physics. The additivity principle is a theoretical proposal that allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Using simulations, we validate this conjecture in a simple and general model of energy transport, both in the presence of a temperature gradient and in canonical equilibrium. In particular, we show that the current distribution displays a Gaussian regime for small current fluctuations, as prescribed by the central limit theorem, and non-Gaussian (exponential) tails for large current deviations, obeying in all cases the Gallavotti-Cohen fluctuation theorem. In order to facilitate a given current fluctuation, the system adopts a well-defined temperature profile different from that of the steady state and in accordance with the predictions of the additivity hypothesis. System statistics during a large current fluctuation are independent of the sign of the current, which implies that the optimal profile (as well as higher-order profiles and spatial correlations) is invariant upon current inversion. We also demonstrate that finite-time joint fluctuations of the current and the profile are well described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool for computing current distributions in many nonequilibrium systems.
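
    For reference, the current large-deviation function predicted by the additivity principle (in the standard Bodineau-Derrida formulation; the notation below is generic rather than taken from the paper) is a variational problem over density profiles, with D the diffusivity and sigma the mobility of the model:

      P(q) \sim e^{-t L\, G(q)}, \qquad
      G(q) = \min_{\rho(x)} \int_0^1
        \frac{\left[\, q + D(\rho)\,\partial_x \rho \,\right]^2}{2\,\sigma(\rho)}\, dx .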

  15. 78 FR 47804 - Verification, Validation, Reviews, and Audits for Digital Computer Software Used in Safety...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    …"Configuration Management Plans for Digital Computer Software Used in Safety Systems of Nuclear Power Plants"…Digital Computer Software Used in Safety Systems of Nuclear Power Plants. AGENCY: Nuclear Regulatory…"Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants." This…

  16. Models of optical quantum computing

    NASA Astrophysics Data System (ADS)

    Krovi, Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  17. Laser output power stabilization for direct laser writing system by using an acousto-optic modulator.

    PubMed

    Kim, Dong Ik; Rhee, Hyug-Gyo; Song, Jae-Bong; Lee, Yun-Woo

    2007-10-01

    We present experimental results on the output power stabilization of an Ar+ laser for a direct laser writing system (LWS). Instability of the laser output power in the LWS causes resolution fluctuations in the diffractive optical elements or computer-generated holograms being fabricated. To reduce the power fluctuations, we constructed a feedback loop comprising an acousto-optic modulator, a photodetector, and a servo controller. With this system, we achieved a stability of +/-0.20% over 12 min and a relative intensity noise level of 2.1 x 10^-7 Hz^-1/2 at 100 Hz. In addition, we applied our system to a 2 mW internal-mirror He-Ne laser and achieved an output power stability of +/-0.12% over 25 min.
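
    The stabilization scheme amounts to a feedback loop: the photodetector samples the delivered power and the servo trims the RF drive of the acousto-optic modulator to null the error. Below is a minimal discrete-time sketch with an integral servo; the gain and the drifting "plant" are hypothetical stand-ins for the real optics.

      import random

      SETPOINT = 1.0   # desired optical power, arbitrary units
      KI = 0.3         # integral gain (hypothetical)

      def plant(drive):
          """Toy laser + AOM: transmitted power tracks the drive plus slow drift."""
          plant.drift += random.gauss(0.0, 0.002)
          return drive * (1.0 + plant.drift)
      plant.drift = 0.0

      drive = 1.0
      for _ in range(1000):
          power = plant(drive)       # photodetector reading
          error = SETPOINT - power
          drive += KI * error        # servo correction applied to the AOM drive
      print(f"residual error after servo: {SETPOINT - power:.4f}")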

  18. Computers in Electrical Engineering Education at Virginia Polytechnic Institute.

    ERIC Educational Resources Information Center

    Bennett, A. Wayne

    1982-01-01

    Discusses use of computers in Electrical Engineering (EE) at Virginia Polytechnic Institute. Topics include: departmental background, level of computing power using large scale systems, mini and microcomputers, use of digital logic trainers and analog/hybrid computers, comments on integrating computers into EE curricula, and computer use in…

  19. 2007 Wholesale Power Rate Case Initial Proposal : Wholesale Power Rate Development Study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    United States. Bonneville Power Administration.

    The Wholesale Power Rate Development Study (WPRDS) calculates BPA's proposed rates based on information either developed in the WPRDS or supplied by the other studies that comprise the BPA rate proposal. All of these studies, and their accompanying documentation, provide the details of computations and assumptions. In general, information about loads and resources is provided by the Load Resource Study (LRS), WP-07-E-BPA-01, and the LRS Documentation, WP-07-E-BPA-01A. Revenue requirements information, as well as the Planned Net Revenues for Risk (PNNR), is provided in the Revenue Requirement Study, WP-07-E-BPA-02, and its accompanying Revenue Requirement Study Documentation, WP-07-E-BPA-02A and WP-07-E-BPA-02B. The Market Price Forecast Study (MPFS), WP-07-E-BPA-03, and the MPFS Documentation, WP-07-E-BPA-03A, provide the WPRDS with information regarding seasonal and diurnal differentiation of energy rates, as well as information regarding monthly market prices for Demand Rates. In addition, this study provides information for the pricing of unbundled power products. The Risk Analysis Study, WP-07-E-BPA-04, and the Risk Analysis Study Documentation, WP-07-E-BPA-04A, provide short-term balancing purchases as well as secondary energy sales and revenue. The Section 7(b)(2) Rate Test Study, WP-07-E-BPA-06, and the Section 7(b)(2) Rate Test Study Documentation, WP-07-E-BPA-06A, implement Section 7(b)(2) of the Northwest Power Act to ensure that BPA preference customers' firm power rates applied to their general requirements are no higher than rates calculated using specific assumptions in the Northwest Power Act.

  20. Where next for the reproducibility agenda in computational biology?

    PubMed

    Lewis, Joanna; Breeze, Charles E; Charlesworth, Jane; Maclaren, Oliver J; Cooper, Jonathan

    2016-07-15

    The concept of reproducibility is a foundation of the scientific method. With the arrival of fast and powerful computers over the last few decades, there has been an explosion of results based on complex computational analyses and simulations. The reproducibility of these results has been addressed mainly in terms of exact replicability or numerical equivalence, ignoring the wider issue of the reproducibility of conclusions through equivalent, extended or alternative methods. We use case studies from our own research experience to illustrate how concepts of reproducibility might be applied in computational biology. Several fields have developed 'minimum information' checklists to support the full reporting of computational simulations, analyses and results, and standardised data formats and model description languages can facilitate the use of multiple systems to address the same research question. We note the importance of defining the key features of a result to be reproduced, and the expected agreement between original and subsequent results. Dynamic, updatable tools for publishing methods and results are becoming increasingly common, but sometimes come at the cost of clear communication. In general, the reproducibility of computational research is improving but would benefit from additional resources and incentives. We conclude with a series of linked recommendations for improving reproducibility in computational biology through communication, policy, education and research practice. More reproducible research will lead to higher quality conclusions, deeper understanding and more valuable knowledge.

  1. Polylactides in additive biomanufacturing.

    PubMed

    Poh, Patrina S P; Chhaya, Mohit P; Wunner, Felix M; De-Juan-Pardo, Elena M; Schilling, Arndt F; Schantz, Jan-Thorsten; van Griensven, Martijn; Hutmacher, Dietmar W

    2016-12-15

    New advanced manufacturing technologies under the alias of additive biomanufacturing allow the design and fabrication of a range of products, from pre-operative models, cutting guides and medical devices to scaffolds. The process of printing in 3 dimensions cells, extracellular matrix (ECM) and biomaterials (bioinks, powders, etc.) to generate in vitro and/or in vivo tissue analogue structures has been termed bioprinting. To advance additive biomanufacturing further, there is much to learn from the wider additive manufacturing (AM) industry, which has progressed tremendously since its introduction into the manufacturing sector. First, this review gives an overview of additive manufacturing and of both industry and academic efforts to address specific challenges in AM technologies and drive toward an AM-enabled industrial revolution. Considerations of poly(lactides) as a biomaterial in additive biomanufacturing are then discussed. Challenges in the wider additive biomanufacturing field are discussed in terms of (a) biomaterials; (b) computer-aided design, engineering and manufacturing; (c) AM and additive biomanufacturing printer hardware; and (d) system integration. Finally, the outlook for additive biomanufacturing is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Challenges and Security in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Chang, Hyokyung; Choi, Euiin

    People want problems solved as soon as they arise. Ubiquitous computing is an IT paradigm intended to make such situations easier to handle, and cloud computing is the technology that makes it even better and more powerful. Cloud computing, however, is still at an early stage of implementation and use, and it faces many technical challenges and security issues. This paper examines cloud computing security.

  3. Mission: Define Computer Literacy. The Illinois-Wisconsin ISACS Computer Coordinators' Committee on Computer Literacy Report (May 1985).

    ERIC Educational Resources Information Center

    Computing Teacher, 1985

    1985-01-01

    Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programing are outlined, together with additional units on spreadsheets, simulations,…

  4. High-Performance Computing Data Center Power Usage Effectiveness

    Science.gov Websites

    When the Energy Systems Integration Facility (ESIF) was conceived, NREL set an … heating, ventilation, and air conditioning (HVAC), which captures the fan walls and fan coils that support the data center.
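
    For context, power usage effectiveness is the ratio of total facility energy to the energy delivered to the IT equipment, so HVAC loads such as the fan walls and fan coils above enter only the numerator. In LaTeX (the standard industry definition, not an NREL-specific formula):

      \mathrm{PUE} \;=\; \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}} \;\ge\; 1 .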

  5. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging: it frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  6. Cloud Computing for radiologists

    PubMed Central

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging: it frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  7. Intermediate-sized natural gas fueled carbonate fuel cell power plants

    NASA Astrophysics Data System (ADS)

    Sudhoff, Frederick A.; Fleming, Donald K.

    1994-04-01

    This executive summary describes the accomplishments of the joint US Department of Energy (DOE) Morgantown Energy Technology Center (METC) and M-C POWER Corporation Cooperative Research and Development Agreement (CRADA) No. 93-013. The study addresses the intermediate power plant size between 2 megawatts (MW) and 200 MW; a 25 MW natural-gas-fueled carbonate fuel cell power plant was chosen for this purpose. In keeping with recent designs, the fuel cell operates under approximately three atmospheres of pressure. An expander/alternator is utilized to expand the exhaust gas to atmospheric conditions and generate additional power. A steam-bottoming cycle is not included in this study because it is not believed to be cost-effective at this system size. The study also compares the simplicity and accuracy of a spreadsheet-based simulation with those of a full Advanced System for Process Engineering (ASPEN) simulation. The simple spreadsheet model can be run entirely on a personal computer, can be made available to all users, and is particularly advantageous to the small-business user.

  8. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    PubMed

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasonically induced cavitation in liquid foods. However, most new food processing developments have been based on empirical approaches, so there is a need for mathematical models that help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and induced heat generated by an ultrasonic horn reactor. The model assumes that the horn tip is a fluid inlet through which a turbulent jet flow is injected into the vessel, with the hydrodynamic momentum rate of the incoming jet set equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W_0/V ≥ 25 kW m^-3. This model successfully describes the hydrodynamic fields (streaming) generated by low-frequency, high-power ultrasound. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
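
    The inlet boundary condition can be made concrete: a sound beam of power P carries momentum at a rate P/c, and equating this with the momentum rate rho*A*u^2 of a uniform jet over the horn-tip area A gives the inlet velocity. The numbers below are illustrative assumptions, not values from the paper.

      import math

      P = 100.0        # acoustic power delivered by the horn, W (assumed)
      c = 1482.0       # speed of sound in water, m/s
      rho = 998.0      # density of water, kg/m^3
      r_tip = 0.0065   # horn tip radius, m (assumed)

      A = math.pi * r_tip ** 2
      momentum_rate = P / c                        # total acoustic momentum rate, N
      u = math.sqrt(momentum_rate / (rho * A))     # from rho * A * u^2 = P / c
      print(f"inlet jet velocity ~ {u:.2f} m/s")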

  9. Computer Assisted Instructional Design for Computer-Based Instruction. Final Report. Working Papers.

    ERIC Educational Resources Information Center

    Russell, Daniel M.; Pirolli, Peter

    Recent advances in artificial intelligence and the cognitive sciences have made it possible to develop successful intelligent computer-aided instructional systems for technical and scientific training. In addition, computer-aided design (CAD) environments that support the rapid development of such computer-based instruction have also been recently…

  10. Computer Use and Computer Anxiety in Older Korean Americans.

    PubMed

    Yoon, Hyunwoo; Jang, Yuri; Xie, Bo

    2016-09-01

    Responding to the limited literature on computer use in ethnic minority older populations, the present study examined predictors of computer use and computer anxiety in older Korean Americans. Separate regression models were estimated for computer use and computer anxiety with a common set of predictors: (a) demographic variables (age, gender, marital status, and education), (b) physical health indicators (chronic conditions, functional disability, and self-rated health), and (c) sociocultural factors (acculturation and attitudes toward aging). Approximately 60% of the participants were computer users, and they had significantly lower levels of computer anxiety than non-users. A higher likelihood of computer use and lower levels of computer anxiety were commonly observed among individuals with younger age, male gender, advanced education, more positive ratings of health, and higher levels of acculturation. In addition, positive attitudes toward aging were found to reduce computer anxiety. Findings provide implications for developing computer training and education programs for the target population. © The Author(s) 2015.

  11. High power communication satellites power systems study

    NASA Astrophysics Data System (ADS)

    Josloff, Allan T.; Peterson, Jerry R.

    1995-01-01

    This paper discusses a planned study to evaluate the commercial attractiveness of high-power communication satellites and assess the attributes of both conventional photovoltaic and reactor power systems. These high-power satellites can play a vital role in assuring the availability of universally accessible, wide-bandwidth communications for high-definition TV, supercomputer networks and other services. Satellites are ideally suited to provide the wide bandwidths and data rates required and are unique in their ability to provide services directly to users. As new or relocated markets arise, satellites offer a flexibility that conventional distribution services cannot match, and it is no longer necessary to be near population centers to take advantage of the telecommunications revolution. The geopolitical implications of these substantially enhanced communications capabilities can be significant.

  12. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    NASA Astrophysics Data System (ADS)

    Kaita, R.; Ignat, D. W.; Jardin, S. C.; Okabayashi, M.; Sun, Y. C.

    1996-02-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments.

  13. Computing in Qualitative Analysis: A Healthy Development?

    ERIC Educational Resources Information Center

    Richards, Lyn; Richards, Tom

    1991-01-01

    Discusses the potential impact of computers in qualitative health research. Describes the original goals, design, and implementation of NUDIST, a qualitative computing software. Argues for evaluation of the impact of computer techniques and for an opening of debate among program developers and users to address the purposes and power of computing…

  14. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with both the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and thus offers a lightweight solution for computational offloading in MCC.
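
    The "lightweight" idea, deciding up front whether a component runs locally or on the cloud instead of partitioning the application at runtime, can be sketched as a one-line cost comparison. The cost model and every name below are hypothetical illustrations, not the paper's framework.

      def should_offload(cycles, data_bytes,
                         local_ops=1e9, cloud_ops=1e10, bandwidth=1e6):
          """Offload when cloud execution plus data transfer beats local execution."""
          t_local = cycles / local_ops                       # seconds on the device
          t_cloud = cycles / cloud_ops + data_bytes / bandwidth
          return t_cloud < t_local

      # A compute-heavy, data-light component is a good offloading candidate:
      print(should_offload(cycles=5e9, data_bytes=2e5))      # True (0.7 s vs 5 s)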

  15. Additive manufacturing of tunable lenses

    NASA Astrophysics Data System (ADS)

    Schlichting, Katja; Novak, Tobias; Heinrich, Andreas

    2017-02-01

    Additive manufacturing of individual optical systems by 3D printing offers varied possibilities in design and usage. Beyond the additive manufacturing procedure itself, the use of tunable lenses brings further advantages for intelligent optical systems. Our goal is to combine the advantages of additive manufacturing with the huge potential of tunable lenses. We produced tunable lenses as a bundle without any further processing steps, such as polishing. The lenses were designed and printed directly as a package with a 3D printer. The design contains the membrane as the optical part as well as the mechanical parts of the lens, such as the attachments for the sleeves which contain the oil. The dynamic optical lenses were filled with oil, and the focal length of a lens changes with the radius of curvature of the membrane, which is controlled by changing the pressure inside the lens. In addition, we designed lenses with special structures to obtain different areas with individual optical powers. We discuss the large potential of this technology for several applications. Because an appropriate control system is needed, we also show possibilities for controlling and regulating the optical power of the lenses. The lenses could be used for illumination tasks and, in the future, for individual measurement tasks. The main advantage is the possibility of creating an individual design that completely fulfills the requirements of a specific application.
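
    The pressure-tuning principle follows from the textbook lensmaker's equation: for a thin lens of refractive index n whose membrane bulges into a spherical cap of radius R while the other surface stays flat, the focal length is approximately R/(n-1), so raising the internal pressure reduces R and shortens the focal length. In LaTeX (a standard relation, not a formula stated in the paper):

      \frac{1}{f} = (n-1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
      \;\xrightarrow{\;R_2 \to \infty\;}\; f \approx \frac{R}{n-1} .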

  16. Theoretical and experimental investigation of design for multioptical-axis freeform progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Xiang, HuaZhong; Chen, JiaBi; Zhu, TianFen; Wei, YeFei; Fu, DongXiang

    2015-11-01

    A freeform progressive addition lens (PAL) provides a good solution for correcting presbyopia and preventing juvenile myopia by distributing optical power among distance, intermediate, and near zones, and it is increasingly adopted in optometric studies. However, a design approach that goes beyond a single-optical-axis system is still lacking. This paper presents an approach for designing a freeform PAL in which a multioptical-axis system, based on the real viewing conditions of the eyes, is employed for the representation of the freeform surface. Small pupils in the intermediate zone were filled in as a progressive corridor, and the distance- and near-vision portions were defined as standard spherical surfaces delimited by quadratic curves. Three freeform PALs with a spherical surface as the front side and a freeform surface as the backside were designed. We demonstrate the fabrication and measurement technologies for the PAL surface using computer numerical control machine tools from Schneider Smart and a Visionix VM-2000 Lens Power Mapper, from which surface power and astigmatism values were obtained. Preliminary results showed that this approach to design and fabrication helps to optimize the design procedure and supports the mass production of PALs in optometry.

  17. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.

  18. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    In the e-science field there are growing needs for computing infrastructure that is more dynamic and customizable, with an on-demand model of use that follows the exact request in terms of resources and storage capacity. The integration of grid and cloud infrastructure solutions allows us to offer services whose availability can adapt by scaling resources up and down. The main challenge for e-science domains is to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing their use in order to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all contribute to the increasing complexity of applications that require high processing power and storage for a limited time, often exceeding the computational resources that equip the majority of laboratories or research units in an organization. Very often it is necessary to adapt, tweak, or even rethink tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to adapt them to deployment on cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues such as processing time, the scheduling of processing tasks, the storage of results, and multi-user environments. For these reasons, it is necessary to rethink the way e-science applications are written so that they are already adapted to exploit the potential of cloud computing services through the use of the IaaS, PaaS and SaaS layers. Another important focus is on creating and using hybrid infrastructures, typically a federation between private and public clouds; in this way, when all the resources owned by the organization are in use, it will be easy with a federate…

  19. Operate a Nuclear Power Plant.

    ERIC Educational Resources Information Center

    Frimpter, Bonnie J.; And Others

    1983-01-01

    Describes classroom use of a computer program originally published in Creative Computing magazine. "The Nuclear Power Plant" (runs on an Apple II with 48K memory) simulates the operation of a nuclear generating station, requiring students to make decisions as they assume the task of managing the plant. (JN)

  20. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676

  1. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    PubMed

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, they can only be applied to the computation of geometric moments of homogeneous objects; this advantage and restriction are shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N from N^9, for previously proposed exact algorithms, to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree; the higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, whereas the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
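
    The surface-based idea is easiest to see for the lowest-order moments: by the divergence theorem, the volume (zeroth moment) and first moments of a closed, consistently oriented triangle mesh reduce to sums of signed-tetrahedron contributions, one per triangle. The sketch below shows only this low-order special case, not the paper's arbitrary-order algorithm.

      import numpy as np

      def low_order_moments(vertices, faces):
          """Volume and centroid of a closed, consistently oriented triangle mesh."""
          tri = vertices[faces]                    # (F, 3, 3): corner coordinates per face
          a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
          det = np.einsum('ij,ij->i', a, np.cross(b, c))   # 6 x signed tetra volume
          volume = det.sum() / 6.0
          # Each tetrahedron (origin, a, b, c) has centroid (a + b + c) / 4.
          first_moment = (det[:, None] * (a + b + c) / 4.0).sum(axis=0) / 6.0
          return volume, first_moment / volume

      # A unit cube mesh would return volume 1.0 and its geometric center.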

  2. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to

  3. Chromatin Computation

    PubMed Central

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
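
    The analogy to a Turing-machine tape can be caricatured in a few lines: nucleosomes are cells carrying marks, and a modifying complex is a local read-write rule applied along the string. The spreading rule below is invented purely for illustration and corresponds to no specific biological complex.

      # Toy chromatin rewrite: 'M' = marked nucleosome, 'u' = unmodified.
      tape = list("uuMuuuMu")

      def spread_mark(tape):
          """One pass of a hypothetical reader-writer complex: each mark
          spreads to its right-hand neighbor."""
          out = tape[:]             # write to a copy so the pass acts simultaneously
          for i, mark in enumerate(tape[:-1]):
              if mark == "M":
                  out[i + 1] = "M"
          return out

      for _ in range(3):
          tape = spread_mark(tape)
      print("".join(tape))          # marks propagate along the nucleosome string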

  4. Technopower and Technoppression: Some Abuses of Power and Control in Computer-Assisted Writing Environments (Computers and Controversy).

    ERIC Educational Resources Information Center

    Janangelo, Joseph

    1991-01-01

    Examines the exploitation of individuals that occurs within writing classrooms by those who organize computer systems. Discusses these abuses in three categories: teachers observing students, teachers observing teachers, and students observing students. Shows how computer-enhanced writing environments, if not designed carefully, can inhibit as…

  5. Quantum computation with indefinite causal structures

    NASA Astrophysics Data System (ADS)

    Araújo, Mateus; Guérin, Philippe Allard; Baumeler, Ämin

    2017-11-01

    One way to study the physical plausibility of closed timelike curves (CTCs) is to examine their computational power. This has been done for Deutschian CTCs (D-CTCs) and postselection CTCs (P-CTCs), with the result that they allow for the efficient solution of problems in PSPACE and PP, respectively. Since these are extremely powerful complexity classes, which are not expected to be solvable in reality, this can be taken as evidence that these models for CTCs are pathological. This problem is closely related to the nonlinearity of these models, which also allows, for example, cloning quantum states, in the case of D-CTCs, or distinguishing nonorthogonal quantum states, in the case of P-CTCs. In contrast, the process matrix formalism allows one to model indefinite causal structures in a linear way, getting rid of these effects and raising the possibility that its computational power is rather tame. In this paper, we show that process matrices correspond to a linear particular case of P-CTCs, and therefore that their computational power is upper-bounded by that of PP. We show, furthermore, a family of processes that can violate causal inequalities but nevertheless can be simulated by a causally ordered quantum circuit with only a constant overhead, showing that indefinite causality is not necessarily hard to simulate.

  6. Research on spacecraft electrical power conversion

    NASA Technical Reports Server (NTRS)

    Wilson, T. G.

    1983-01-01

    The history of spacecraft electrical power conversion in the literature, research, and practice is reviewed. It is noted that the design techniques, analyses, and understanding developed then contribute to today's power systems for computer and communication installations. New applications which require more power, improved dynamic response, greater reliability, and lower cost are outlined. The switching-mode approach in electronic power conditioning is discussed, and technical aspects of the research are summarized.

  7. Universal computer test stand (recommended computer test requirements). [for space shuttle computer evaluation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Techniques are considered which would be used to characterize aerospace computers, with the space shuttle application as the end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system is defined, which is required as a laboratory tool to test aerospace computers. Hardware and software baselines and the additions necessary to interface the UTE to aerospace computers for test purposes are outlined.

  8. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced significantly, to clinically feasible levels, without the sizable investment of a local high-performance cluster. This study investigated the reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
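
    The reported scaling, runtime falling with cluster size according to an inverse power model, amounts to fitting T(n) = a * n^b with b < 0. A sketch of that fit follows; the single-node and 20-node timings are from the abstract, while the intermediate points are made up for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      nodes = np.array([1, 2, 5, 10, 20])
      minutes = np.array([53.0, 27.5, 11.8, 6.2, 3.11])  # only the endpoints are reported

      def inverse_power(n, a, b):
          return a * n ** b

      (a, b), _ = curve_fit(inverse_power, nodes, minutes, p0=(50.0, -1.0))
      print(f"T(n) ~ {a:.1f} * n^{b:.2f}")   # b close to -1 means near-ideal scaling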

  9. Additive manufacturing of materials: Opportunities and challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babu, Sudarsanam Suresh; Love, Lonnie J.; Dehoff, Ryan R.

    Additive manufacturing (also known as 3D printing) is considered a disruptive technology for producing components with topologically optimized complex geometries as well as functionalities that are not achievable by traditional methods. The realization of the full potential of 3D printing is stifled by a lack of computational design tools, generic material feedstocks, techniques for monitoring thermomechanical processes under in situ conditions, and especially methods for minimizing anisotropic static and dynamic properties brought about by microstructural heterogeneity. In this paper, we discuss the role of interdisciplinary research involving robotics and automation, process control, multiscale characterization of microstructure and properties, and high-performance computational tools to address each of these challenges. In addition, emerging pathways to scale up additive manufacturing of structural materials to large sizes (>1 m) and higher productivities (5–20 kg/h) while maintaining mechanical performance and geometrical flexibility are also discussed.

  10. Additive manufacturing of materials: Opportunities and challenges

    DOE PAGES

    Babu, Sudarsanam Suresh; Love, Lonnie J.; Dehoff, Ryan R.; ...

    2015-11-01

    Additive manufacturing (also known as 3D printing) is considered a disruptive technology for producing components with topologically optimized complex geometries as well as functionalities that are not achievable by traditional methods. The realization of the full potential of 3D printing is stifled by a lack of computational design tools, generic material feedstocks, techniques for monitoring thermomechanical processes under in situ conditions, and especially methods for minimizing anisotropic static and dynamic properties brought about by microstructural heterogeneity. In this paper, we discuss the role of interdisciplinary research involving robotics and automation, process control, multiscale characterization of microstructure and properties, and high-performance computational tools to address each of these challenges. In addition, emerging pathways to scale up additive manufacturing of structural materials to large sizes (>1 m) and higher productivities (5–20 kg/h) while maintaining mechanical performance and geometrical flexibility are also discussed.

  11. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from the historical records of wind power from an offshore region and from a wind farm of the wind power plant in the area. The WPP was achieved in two stages: first, the ratios of wind power were forecasted using the proposed hybrid method, and then these ratios were transformed to obtain the forecasted values. The hybrid model combines the persistence method, MLR and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. WPP was tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN). Comparing the results of these models confirms the validity of the proposed hybrid model in terms of error and correlation coefficient. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable wind power and the correlations between actual and predicted wind power.
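
    As a rough illustration of the MLR&LS idea (with no claim to reproduce the paper's feature set), the sketch below fits lagged wind-power ratios by ordinary least squares and reports the error and correlation metrics the abstract uses for validation; the data series is synthetic:

        import numpy as np

        # Synthetic AR(1) series standing in for historical wind-power ratios.
        rng = np.random.default_rng(0)
        ratio = np.empty(200)
        ratio[0] = 0.5
        for t in range(1, 200):
            ratio[t] = 0.7 * ratio[t - 1] + 0.1 + 0.05 * rng.standard_normal()

        lags = 3                                   # previous ratios as predictors
        X = np.column_stack([ratio[i:len(ratio) - lags + i] for i in range(lags)])
        X = np.column_stack([np.ones(len(X)), X])  # intercept term
        y = ratio[lags:]

        coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
        pred = X @ coef

        rmse = np.sqrt(np.mean((pred - y) ** 2))
        corr = np.corrcoef(pred, y)[0, 1]
        print(f"RMSE = {rmse:.3f}, correlation = {corr:.3f}")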

  12. Demystifying the GMAT: Computer-Based Testing Terms

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2012-01-01

    Computer-based testing can be a powerful means to make all aspects of test administration not only faster and more efficient, but also more accurate and more secure. While the Graduate Management Admission Test (GMAT) exam is a computer adaptive test, there are other approaches. This installment presents a primer of computer-based testing terms.

  13. Simulations of Low Power DIII-D Helicon Antenna Coupling

    NASA Astrophysics Data System (ADS)

    Smithe, David; Jenkins, Thomas

    2017-10-01

    We present an overview and initial progress for a new project to model coupling of the DIII-D Helicon Antenna. We lay the necessary computational groundwork for modeling both low-power and high-power helicon antenna operation by constructing numerical representations of both the antenna hardware and the DIII-D plasma. CAD files containing the detailed geometry of the low-power antenna hardware are imported into the VSim software's FDTD plasma model. The plasma can be represented numerically by importing EQDSK or EFIT files. In addition, approximate analytic forms for the ensuing profiles and fields are constructed to facilitate parameter scans in the various regimes of anticipated antenna operation. To verify the accuracy of the numerical plasma and antenna representations, we will then run baseline simulations of low-power antenna operation and verify that the predictions for loading, linear coupling, and mode partitioning (i.e., into helicon and slow modes) are consistent with measurements from the low-power helicon antenna experimental campaign, as well as with other independent models. Progress on these baseline simulations will be presented, and any inconsistencies and issues that arise during this process will be identified. Support provided by DOE Grant DE-SC0017843.
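
    The "approximate analytic forms" mentioned above lend themselves to quick parameter scans. A minimal sketch, assuming a generic tanh pedestal density profile; the functional form and the numbers are illustrative assumptions, not the profiles used in the VSim study:

        import numpy as np

        def density(r, n0, r0, width):
            """Tanh-like pedestal: core density n0 rolling off near r = r0."""
            return 0.5 * n0 * (1.0 - np.tanh((r - r0) / width))

        r = np.linspace(0.0, 1.2, 7)           # normalized minor radius
        for n0 in (1e19, 3e19, 5e19):          # scan core density in m^-3
            print(f"n0 = {n0:.0e}:", density(r, n0, r0=1.0, width=0.05))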

  14. Computer-aided modeling and prediction of performance of the modified Lundell class of alternators in space station solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Demerdash, Nabeel A. O.; Wang, Ren-Hong

    1988-01-01

    The main purpose of this project is the development of computer-aided models for studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs in the forthcoming space station. Key to this modeling effort is the computation of the magnetic field distribution in MLAs. Since the magnetic field is three-dimensional in nature, the first step in the investigation was to apply the finite element method to discretize the volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.
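
    Since the tetrahedron is the basic 3-D element here, the elementary geometric quantity behind such a grid is the element volume, obtained from a determinant of edge vectors. A small illustrative helper (the coordinates are arbitrary, not taken from the MLA grid):

        import numpy as np

        def tet_volume(p0, p1, p2, p3):
            """Volume of a tetrahedron given its four vertex coordinates."""
            edges = np.array([p1 - p0, p2 - p0, p3 - p0])
            return abs(np.linalg.det(edges)) / 6.0

        verts = [np.array(v, dtype=float)
                 for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
        print(tet_volume(*verts))    # 1/6 for the unit reference tetrahedron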

  15. EMTP; A powerful tool for analyzing power system transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, W.; Cotcher, D.; Ruiu, D.

    1990-07-01

    This paper reports on the electromagnetic transients program (EMTP), a general-purpose computer program for simulating high-speed transient effects in electric power systems. The program features an extremely wide variety of modeling capabilities encompassing electromagnetic and electromechanical oscillations ranging in duration from microseconds to seconds. Examples of its use include switching and lightning surge analysis, insulation coordination, shaft torsional oscillations, ferroresonance, and HVDC converter control and operation. In the late 1960s Hermann Dommel developed the EMTP at Bonneville Power Administration (BPA), which considered the program to be the digital computer replacement for the transient network analyzer. The program initially comprised about 5000 lines of code and was useful primarily for transmission line switching studies. As more uses for the program became apparent, BPA coordinated many improvements to the program. As the program grew in versatility and in size, it likewise became more unwieldy and difficult to use. One had to be an EMTP aficionado to take advantage of its capabilities.

  16. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and with the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245
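
    The trade-off such frameworks weigh can be caricatured in a few lines: outsource a component only when remote execution plus data transfer beats local execution. This is a generic sketch, not the paper's framework, and every name and number below is hypothetical:

        def should_offload(cycles, local_mips, cloud_mips, data_bytes, bw_bps):
            """True if offloading lowers turnaround time for one component."""
            local_time = cycles / (local_mips * 1e6)
            remote_time = cycles / (cloud_mips * 1e6) + 8 * data_bytes / bw_bps
            return remote_time < local_time

        # A compute-heavy component with little state to ship is worth offloading.
        print(should_offload(cycles=5e9, local_mips=500, cloud_mips=20000,
                             data_bytes=200_000, bw_bps=10e6))   # -> True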

  17. Continuous-variable quantum computing on encrypted data.

    PubMed

    Marshall, Kevin; Jacobsen, Christian S; Schäfermeier, Clemens; Gehring, Tobias; Weedbrook, Christian; Andersen, Ulrik L

    2016-12-14

    The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks.

  18. Continuous-variable quantum computing on encrypted data

    NASA Astrophysics Data System (ADS)

    Marshall, Kevin; Jacobsen, Christian S.; Schäfermeier, Clemens; Gehring, Tobias; Weedbrook, Christian; Andersen, Ulrik L.

    2016-12-01

    The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks.

  19. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    Computer-aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time-domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using the augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop-controlled switching regulators; and (4) application of optimization to the Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer-aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of computer-aided design techniques to power processing systems.
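
    For the augmented Lagrangian penalty technique named in topic (2), a toy sketch may help: minimize a surrogate loss f(x) under an equality constraint g(x) = 0 by alternating inner minimization of L(x) = f(x) + λ·g(x) + (ρ/2)·g(x)² with multiplier updates. The objective, constraint and constants are invented for illustration, not the MAPPS formulation:

        import numpy as np

        def f(x):          # surrogate "converter loss" to minimize
            return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

        def g(x):          # equality design constraint, g(x) = 0
            return x[0] + x[1] - 2.0

        def grad_aug(x, lam, rho):
            # gradient of f + lam*g + (rho/2)*g**2
            gf = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
            gg = np.array([1.0, 1.0])              # gradient of g
            return gf + (lam + rho * g(x)) * gg

        x, lam, rho = np.zeros(2), 0.0, 10.0
        for _ in range(50):                        # outer multiplier updates
            for _ in range(200):                   # inner gradient descent
                x -= 0.01 * grad_aug(x, lam, rho)
            lam += rho * g(x)

        print(x, g(x))     # ~[1.5, 0.5]; constraint residual near zero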

  20. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

    In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of Things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable power sources cause frequent power failures, so processed data need to be backed up whenever a power failure occurs. Unless data are safely backed up before the power supply diminishes, reinitialization is required when power is restored, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories enables a faster backup than a conventional volatile computer system, leading to higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows energy reductions of a few orders of magnitude in comparison with a volatile processor with SRAM.
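
    The energy argument reduces to simple arithmetic: a fast nonvolatile backup replaces a costly reinitialization on every power cycle. A back-of-envelope sketch with hypothetical figures (placeholders, not measurements from the paper):

        # Energy per power cycle = useful work + per-cycle overhead.
        useful_work_j = 50e-6    # energy spent on the benchmark itself
        backup_j      = 2e-6     # checkpoint state into MRAM before failure
        reinit_j      = 120e-6   # reboot and rebuild volatile state afterwards

        nonvolatile = useful_work_j + backup_j
        volatile    = useful_work_j + reinit_j
        print(f"nonvolatile: {nonvolatile * 1e6:.0f} uJ/cycle, "
              f"volatile: {volatile * 1e6:.0f} uJ/cycle")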