NASA Astrophysics Data System (ADS)
Gammaitoni, Luca; Chiuchiú, D.; Madami, M.; Carlotti, G.
2015-06-01
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide-semiconductor field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, in 2004 the Nanoelectronics Research Initiative launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
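As a point of reference for the dissipation limits discussed above, the figure usually cited in this literature is the Landauer bound of k_B·T·ln 2 per erased bit. The short Python sketch below is an illustration added here, not material from the article; the 1 fJ "present-day switching energy" is a hypothetical comparison value.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

landauer_bound = K_B * T * math.log(2)   # minimum heat per erased bit
print(f"Landauer bound at 300 K: {landauer_bound:.2e} J")   # ~2.87e-21 J

# Hypothetical comparison: a switching event dissipating ~1 fJ
switch_energy = 1e-15
print(f"Orders of magnitude above the bound: {math.log10(switch_energy / landauer_bound):.1f}")
```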
Margin and sensitivity methods for security analysis of electric power systems
NASA Astrophysics Data System (ADS)
Greene, Scott L.
Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
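The core idea of margin sensitivity can be illustrated with a toy first-order estimate: if the security margin M depends on a parameter p, then M(p0 + Δp) ≈ M(p0) + (∂M/∂p)·Δp. The sketch below uses a made-up quadratic margin function standing in for the thesis's power-system models; the function and numbers are illustrative assumptions only.

```python
def margin_sensitivity(margin_fn, p0, eps=1e-4):
    """Finite-difference estimate of dM/dp at p0 for a scalar parameter."""
    return (margin_fn(p0 + eps) - margin_fn(p0 - eps)) / (2.0 * eps)

def estimate_margin(margin_fn, p0, dp):
    """First-order estimate M(p0 + dp) ~ M(p0) + (dM/dp) * dp."""
    return margin_fn(p0) + margin_sensitivity(margin_fn, p0) * dp

# Hypothetical margin (MW to a limiting event) as a function of an interarea transfer p.
toy_margin = lambda p: 500.0 - 0.8 * p - 0.001 * p**2

print(estimate_margin(toy_margin, p0=100.0, dp=25.0))   # fast linear estimate (~385)
print(toy_margin(125.0))                                 # exact recomputation (~384.4)
```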
Markov chains: computing limit existence and approximations with DNA.
Cardona, M; Colomer, M A; Conde, J; Miret, J M; Miró, J; Zaragoza, A
2005-09-01
We present two algorithms to perform computations over Markov chains. The first one determines whether the sequence of powers of the transition matrix of a Markov chain converges or not to a limit matrix. If it does converge, the second algorithm enables us to estimate this limit. The combination of these algorithms allows the computation of a limit using DNA computing. In this sense, we have encoded the states and the transition probabilities using strands of DNA for generating paths of the Markov chain.
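A minimal classical sketch of the two steps described above, a convergence test on the sequence of matrix powers followed by an estimate of the limit, is given below. This is a plain numerical illustration; the paper instead encodes states and transition probabilities in DNA strands.

```python
import numpy as np

def limit_of_powers(P, tol=1e-10, max_iter=10_000):
    """Return (converges, limit) for the sequence P, P^2, P^3, ...

    Numerical stand-in for the two algorithms: test convergence of the
    powers of the transition matrix, then report the limit if it exists.
    """
    Pk = P.copy()
    for _ in range(max_iter):
        Pk_next = Pk @ P
        if np.max(np.abs(Pk_next - Pk)) < tol:
            return True, Pk_next
        Pk = Pk_next
    return False, None

# Example: a two-state chain with a unique stationary distribution.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
ok, L = limit_of_powers(P)
print(ok)     # True
print(L[0])   # approx. the stationary distribution [0.833..., 0.166...]
```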
NASA Technical Reports Server (NTRS)
Jefferies, K. S.; Tew, R. C.
1974-01-01
A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.
Mechanical Computing Redux: Limitations at the Nanoscale
NASA Astrophysics Data System (ADS)
Liu, Tsu-Jae King
2014-03-01
Technology solutions for overcoming the energy efficiency limits of nanoscale complementary metal oxide semiconductor (CMOS) technology ultimately will be needed in order to address the growing issue of integrated-circuit chip power density. Off-state leakage current sets a fundamental lower limit in energy per operation for any voltage-level-based digital logic implemented with transistors (CMOS and beyond), which leads to practical limits for device density (i.e. cost) and operating frequency (i.e. system performance). Mechanical switches have zero off-state leakage and hence can overcome this fundamental limit. Contact adhesive force sets a lower limit for the switching energy of a mechanical switch, however, and also directly impacts its performance. This paper will review recent progress toward the development of nano-electro-mechanical relay technology and discuss remaining challenges for realizing the promise of mechanical computing for ultra-low-power computing. Supported by the Center for Energy Efficient Electronics Science (NSF Award 0939514).
Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations
2007-08-31
very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power
Data centers as dispatchable loads to harness stranded power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kibaek; Yang, Fan; Zavala, Victor M.
Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.
Cloud Computing and the Power to Choose
ERIC Educational Resources Information Center
Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo
2010-01-01
Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…
Parallel, distributed and GPU computing technologies in single-particle electron microscopy
Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-01-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686
Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing
NASA Astrophysics Data System (ADS)
Kim, Mooseop; Ryou, Jaecheol
The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), which is an industry standard body to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore special architecture and design methods for a low power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low power SHA-1 design for TMP. Our low power SHA-1 hardware can compute a 512-bit data block using less than 7,000 gates and draws a current of about 1.1 mA on a 0.25 μm CMOS process.
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
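To make the analytical-versus-numerical tradeoff concrete, the sketch below compares a screening-level analytical solution with a simple numerical scheme for steady-state advection with first-order biodecay. All parameter values are made up for illustration, and the problem is far simpler than the coupled reactive-transport models discussed in the chapter.

```python
import numpy as np

# Screening comparison: steady-state advection with first-order decay.
# Analytical: C(x) = C0 * exp(-k * x / v); numerical: implicit upwind steps.
v, k, C0, L, n = 1.0, 0.05, 1.0, 100.0, 200   # illustrative values (m/d, 1/d, mg/L, m, cells)
x = np.linspace(0.0, L, n)
analytical = C0 * np.exp(-k * x / v)

dx = x[1] - x[0]
numerical = np.empty(n)
numerical[0] = C0
for i in range(1, n):
    # v*(C[i]-C[i-1])/dx = -k*C[i]  ->  C[i] = C[i-1] / (1 + k*dx/v)
    numerical[i] = numerical[i - 1] / (1.0 + k * dx / v)

print(np.max(np.abs(numerical - analytical)))   # small discretization error
```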
25. Perimeter acquisition radar building room #2M4, (mezzanine), power supply ...
25. Perimeter acquisition radar building room #2M4, (mezzanine), power supply room; computer power supply on left and water flow on right. This room is directly below data processing area (room #318). Sign on right reads: High purity water digital rack - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND
Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.
NASA Astrophysics Data System (ADS)
Chen, Xi Lin; De Santis, Valerio; Esai Umenei, Aghuinyue
2014-07-01
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
Re-Form: FPGA-Powered True Codesign Flow for High-Performance Computing In The Post-Moore Era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappello, Franck; Yoshii, Kazutomo; Finkel, Hal
Multicore scaling will end soon because of practical power limits. Dark silicon is becoming an even bigger issue than the end of Moore’s law. In the post-Moore era, the energy efficiency of computing will be a major concern. FPGAs could be a key to maximizing energy efficiency. In this paper we address severe challenges in the adoption of FPGAs in HPC and describe “Re-form,” an FPGA-powered codesign flow.
Factors affecting frequency and orbit utilization by high power transmission satellite systems.
NASA Technical Reports Server (NTRS)
Kuhns, P. W.; Miller, E. F.; O'Malley, T. A.
1972-01-01
The factors affecting the sharing of the geostationary orbit by high power (primarily television) satellite systems having the same or adjacent coverage areas and by satellites occupying the same orbit segment are examined and examples using the results of computer computations are given. The factors considered include: required protection ratio, receiver antenna patterns, relative transmitter power, transmitter antenna patterns, satellite grouping, and coverage pattern overlap. The results presented indicate the limits of system characteristics and orbit deployment which can result from mixing systems.
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
The Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in the design and analysis of DET and MPS spacecraft power system performance, in order to determine the energy balance of the subsystem. A DET spacecraft power system feeds the output of the solar photovoltaic array and nickel-cadmium batteries directly to the spacecraft bus. In the MPS system, a Standard Power Regulator Unit (SPRU) is utilized to operate the array at its peak power point. DET and MPS perform a minute-by-minute simulation of the performance of the power system. The results of the simulation focus mainly on the output of the solar array and the characteristics of the batteries. Although both packages are limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and the performance of arrays for circular or near-circular orbits. DET and MPS are written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both are available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.
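A minute-by-minute energy-balance loop of the general kind DET/MPS performs can be sketched as follows. The array, load, battery, and orbit numbers are invented for illustration and are not the GSFC models or any real spacecraft data.

```python
# Minute-by-minute spacecraft energy-balance sketch (illustrative values only).
ORBIT_MIN = 90            # orbital period, minutes
ECLIPSE_MIN = 35          # eclipse duration per orbit, minutes
ARRAY_W = 1400.0          # solar-array output in sunlight, W
LOAD_W = 800.0            # constant spacecraft load, W
CAPACITY_WH = 1000.0      # battery capacity, Wh
EFF = 0.9                 # round-trip charge efficiency

soc_wh = CAPACITY_WH      # start fully charged
for minute in range(ORBIT_MIN):
    in_eclipse = minute < ECLIPSE_MIN
    array_w = 0.0 if in_eclipse else ARRAY_W
    net_w = array_w - LOAD_W
    if net_w >= 0:
        soc_wh = min(CAPACITY_WH, soc_wh + EFF * net_w / 60.0)
    else:
        soc_wh = max(0.0, soc_wh + net_w / 60.0)

# If the battery returns to full charge by end of orbit, the orbit is energy balanced.
print(f"End-of-orbit state of charge: {soc_wh:.1f} Wh of {CAPACITY_WH:.0f} Wh")
```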
Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2003-01-01
The accelerating pace of computing technology development shows no signs of abating. Computing power of 100 Tflop/s is likely to be reached by 2004, and Pflop/s (10^15 Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms: grid, ubiquitous and autonomic computing. The third part lists future computing alternatives and the characteristics of the future computing environment. The fourth part describes future aerospace workforce research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, at some data centers the maximum amount of computation is becoming limited by the amount of available power, space and cooling capacity. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. The cooling and air flows dynamically change away from any 3-D computational fluid dynamics model predicted during construction, and as time goes by the efficiency and effectiveness of the actual cooling rapidly depart even farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamics modeling and to make appropriate corrections and repairs, the power required by data centers can be dramatically reduced, which lowers costs and also improves reliability.
Integrating Commercial Off-The-Shelf (COTS) graphics and extended memory packages with CLIPS
NASA Technical Reports Server (NTRS)
Callegari, Andres C.
1990-01-01
This paper addresses the question of how to mix CLIPS with graphics and how to overcome the PC's memory limitations by using the extended memory available in the computer. By adding graphics and extended memory capabilities, CLIPS can be converted into a complete and powerful system development tool on the most economical and popular computer platform. New models of PCs have impressive processing capabilities and graphics resolutions that cannot be ignored and should be used to the fullest of their resources. CLIPS is a powerful expert system development tool, but it cannot be complete without the support of a graphics package needed to create user interfaces and general purpose graphics, or without enough memory to handle large knowledge bases. A well-known limitation of the PC is its real-memory addressing, which restricts CLIPS to only 640 KB of real memory; that problem can now be solved by developing a version of CLIPS that uses extended memory. The user then has access to up to 16 MB of memory on 80286-based computers and, practically, all the available memory (4 GB) on computers that use the 80386 processor. If CLIPS is given a self-configuring graphics package that automatically detects the graphics hardware and pointing device present in the computer, along with access to the extended memory that exists in the computer (with no special hardware needed), the user will be able to create more powerful systems at a fraction of the cost on the most popular, portable, and economical platform available, the PC.
The New Feedback Control System of RFX-mod Based on the MARTe Real-Time Framework
NASA Astrophysics Data System (ADS)
Manduchi, G.; Luchetta, A.; Soppelsa, A.; Taliercio, C.
2014-06-01
A real-time system has been successfully used since 2004 in the RFX-mod nuclear fusion experiment to control the position of the plasma and its magnetohydrodynamic (MHD) modes. However, its latency and the limited computation power of the processors used prevented the use of more aggressive control algorithms. Therefore a new hardware and software architecture has been designed to overcome such limitations and to provide a shorter latency and a much increased computation power. The new system is based on a Linux multi-core server and uses MARTe, a framework for real-time control which is gaining interest in the fusion community.
D'Amico, E J; Neilands, T B; Zambarano, R
2001-11-01
Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed.
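The same kind of power estimate can also be approximated by Monte Carlo simulation for readers without SPSS. The sketch below is an illustration of the general idea, not the SPSS procedure described in the article; it uses an assumed effect size and a simple two-group comparison rather than the ANCOVA/MANOVA/repeated-measures designs.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect_size_d, alpha=0.05, n_sims=5000, seed=0):
    """Monte Carlo power estimate for a two-sample t-test (illustrative design)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size_d, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += 1
    return hits / n_sims

print(simulated_power(n_per_group=64, effect_size_d=0.5))  # ~0.80 for d = 0.5
```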
Transistor analogs of emergent iono-neuronal dynamics.
Rachmuth, Guy; Poon, Chi-Sang
2008-06-01
Neuromorphic analog metal-oxide-silicon (MOS) transistor circuits promise compact, low-power, and high-speed emulations of iono-neuronal dynamics orders-of-magnitude faster than digital simulation. However, the inherent tradeoff between input voltage dynamic range, power consumption, and silicon die area makes them highly sensitive to transistor mismatch due to fabrication inaccuracy, device noise, and other nonidealities. This limitation precludes robust analog very-large-scale-integration (aVLSI) implementation of emergent iono-neuronal dynamics computations beyond simple spiking with limited ion channel dynamics. Here we present versatile neuromorphic analog building-block circuits that afford near-maximum voltage dynamic range operating within the low-power MOS transistor weak-inversion regime, which is ideal for aVLSI implementation or implantable biomimetic device applications. The fabricated microchip allowed robust realization of dynamic iono-neuronal computations such as coincidence detection of presynaptic spikes or pre- and postsynaptic activities. As a critical performance benchmark, the high-speed and highly interactive iono-neuronal simulation capability on-chip enabled our prompt discovery of a minimal model of chaotic pacemaker bursting, an emergent iono-neuronal behavior of fundamental biological significance which has hitherto defied experimental testing or computational exploration via conventional digital or analog simulations. These compact and power-efficient transistor analogs of emergent iono-neuronal dynamics open new avenues for next-generation neuromorphic, neuroprosthetic, and brain-machine interface applications.
Classical multiparty computation using quantum resources
NASA Astrophysics Data System (ADS)
Clementi, Marco; Pappa, Anna; Eckstein, Andreas; Walmsley, Ian A.; Kashefi, Elham; Barz, Stefanie
2017-12-01
In this work, we demonstrate a way to perform classical multiparty computing among parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing are able to jointly compute a nonlinear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits that allows four clients to compute a specific example of a multiparty function, the pairwise AND.
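The sense in which the pairwise AND lies beyond XOR-limited clients can be checked by brute force: no affine function over GF(2) (an XOR of a subset of the inputs, possibly negated) reproduces AND. The sketch below enumerates all such functions of two bits; it illustrates the classical limitation only and says nothing about the quantum protocol itself.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))
and_table = tuple(a & b for a, b in inputs)

# Affine functions over GF(2): f(a, b) = (c1*a) xor (c2*b) xor c0
affine_tables = {
    tuple((c1 & a) ^ (c2 & b) ^ c0 for a, b in inputs)
    for c0, c1, c2 in product([0, 1], repeat=3)
}

print(and_table in affine_tables)   # False: AND is not linear/affine over GF(2)
```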
POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models
Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.
2014-01-01
The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
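The univariate special case mentioned above reduces to a noncentral-F computation. A minimal Python sketch is given below; it is not the SAS/IML code, and the noncentrality convention lambda = f²·N as well as the example numbers are assumptions for illustration.

```python
from scipy import stats

def glm_power(effect_size_f2, df_num, n, alpha=0.05):
    """Power for a fixed-effects general linear model test via the noncentral F.

    effect_size_f2 : Cohen's f-squared (assumed convention: lambda = f2 * n)
    df_num         : numerator (hypothesis) degrees of freedom
    n              : total sample size
    """
    df_den = n - df_num - 1
    nc = effect_size_f2 * n
    f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
    return 1 - stats.ncf.cdf(f_crit, df_num, df_den, nc)

print(round(glm_power(0.15, df_num=3, n=100), 3))   # power for a medium effect
```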
Data multiplexing in radio interferometric calibration
NASA Astrophysics Data System (ADS)
Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.
2018-03-01
New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme that is based on the alternating direction method of multipliers with cyclic updates. With this scheme, it is possible to simultaneously calibrate the full data set using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
Quantum machine learning: a classical perspective
NASA Astrophysics Data System (ADS)
Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard
2018-01-01
Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.
Application of Blind Quantum Computation to Two-Party Quantum Computation
NASA Astrophysics Data System (ADS)
Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong
2018-06-01
Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.
Multimedia CALLware: The Developer's Responsibility.
ERIC Educational Resources Information Center
Dodigovic, Marina
The early computer-assisted-language-learning (CALL) programs were silent and mostly limited to screen or printer supported written text as the prevailing communication resource. The advent of powerful graphics, sound and video combined with AI-based parsers and sound recognition devices gradually turned the computer into a rather anthropomorphic…
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
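The reconciliation idea, adjusting uncertain measurements as little as possible (in a weighted least-squares sense) so that they satisfy physical balance constraints, has a standard closed form when the constraints are linear. The sketch below uses that generic formula with made-up flow numbers; the SSME/TTB analysis itself involves nonlinear engine models, so this is only an illustration of the principle.

```python
import numpy as np

def reconcile(x_meas, cov, A, b):
    """Weighted least-squares data reconciliation for linear constraints A x = b.

    Closed form: x = x_meas - cov A^T (A cov A^T)^-1 (A x_meas - b),
    i.e. the smallest covariance-weighted adjustment that satisfies the balance.
    """
    residual = A @ x_meas - b
    correction = cov @ A.T @ np.linalg.solve(A @ cov @ A.T, residual)
    return x_meas - correction

# Toy example: three measured flows that should balance, m1 + m2 = m3.
x_meas = np.array([10.2, 5.1, 14.7])
cov = np.diag([0.2**2, 0.1**2, 0.3**2])   # assumed measurement variances
A = np.array([[1.0, 1.0, -1.0]])
b = np.array([0.0])

x_rec = reconcile(x_meas, cov, A, b)
print(x_rec, A @ x_rec)   # reconciled flows satisfy the balance to machine precision
```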
NASA Astrophysics Data System (ADS)
Holden, Jacob R.
Descending maple seeds generate lift to slow their fall and remain aloft in a blowing wind; have the wings of these seeds evolved to descend as slowly as possible? A unique energy balance equation, experimental data, and computational fluid dynamics simulations have all been developed to explore this question from a turbomachinery perspective. The computational fluid dynamics in this work is the first to be performed in the relative reference frame. Maple seed performance has been analyzed for the first time based on principles of wind turbine analysis. Application of the Betz Limit and one-dimensional momentum theory allowed for empirical and computational power and thrust coefficients to be computed for maple seeds. It has been determined that the investigated species of maple seeds perform near the Betz limit for power conversion and thrust coefficient. The power coefficient for a maple seed is found to be in the range of 48-54% and the thrust coefficient in the range of 66-84%. From Betz theory, the stream tube area expansion of the maple seed is necessary for power extraction. Further investigation of computational solutions and mechanical analysis find three key reasons for high maple seed performance. First, the area expansion is driven by maple seed lift generation changing the fluid momentum and requiring area to increase. Second, radial flow along the seed surface is promoted by a sustained leading edge vortex that centrifuges low momentum fluid outward. Finally, the area expansion is also driven by the spanwise area variation of the maple seed imparting a radial force on the flow. These mechanisms result in a highly effective device for the purpose of seed dispersal. However, the maple seed also provides insight into fundamental questions about how turbines can most effectively change the momentum of moving fluids in order to extract useful power or dissipate kinetic energy.
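The coefficients quoted above follow from one-dimensional momentum (actuator-disk) theory. Below is a minimal sketch of the coefficient definitions and the Betz limit; the numerical values are invented for illustration and are not the measured maple-seed data.

```python
RHO_AIR = 1.225   # kg/m^3

def power_coefficient(extracted_power_w, swept_area_m2, v_inf):
    """C_p = P / (0.5 * rho * A * V^3): fraction of incoming wind power extracted."""
    return extracted_power_w / (0.5 * RHO_AIR * swept_area_m2 * v_inf**3)

def thrust_coefficient(thrust_n, swept_area_m2, v_inf):
    """C_t = T / (0.5 * rho * A * V^2)."""
    return thrust_n / (0.5 * RHO_AIR * swept_area_m2 * v_inf**2)

BETZ_LIMIT = 16.0 / 27.0   # ~0.593, the ideal-rotor maximum of C_p

# Illustrative values only (not measured seed data):
cp = power_coefficient(extracted_power_w=6.0e-4, swept_area_m2=2.0e-3, v_inf=1.0)
print(f"C_p = {cp:.2f}  (Betz limit = {BETZ_LIMIT:.3f})")
```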
GLIDE: a grid-based light-weight infrastructure for data-intensive environments
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.
2005-01-01
The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.
NASA Astrophysics Data System (ADS)
Wei, Tzu-Chieh; Huang, Ching-Yu
2017-09-01
Recent progress in the characterization of gapped quantum phases has also triggered the search for a universal resource for quantum computation in symmetric gapped phases. Prior works in one dimension suggest that it is a feature more common than previously thought, in that nontrivial one-dimensional symmetry-protected topological (SPT) phases provide quantum computational power characterized by the algebraic structure defining these phases. Progress in two and higher dimensions so far has been limited to special fixed points. Here we provide two families of two-dimensional Z2 symmetric wave functions such that there exists a finite region of the parameter in the SPT phases that supports universal quantum computation. The quantum computational power appears to lose its universality at the boundary between the SPT and the symmetry-breaking phases.
Increasing power-law range in avalanche amplitude and energy distributions
NASA Astrophysics Data System (ADS)
Navas-Portella, Víctor; Serra, Isabel; Corral, Álvaro; Vives, Eduard
2018-02-01
Power-law-type probability density functions spanning several orders of magnitude are found for different avalanche properties. We propose a methodology to overcome empirical constraints that limit the range of truncated power-law distributions. By considering catalogs of events that cover different observation windows, the maximum likelihood estimation of a global power-law exponent is computed. This methodology is applied to amplitude and energy distributions of acoustic emission avalanches in failure-under-compression experiments of a nanoporous silica glass, finding in some cases global exponents in an unprecedented broad range: 4.5 decades for amplitudes and 9.5 decades for energies. In the latter case, however, strict statistical analysis suggests experimental limitations might alter the power-law behavior.
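A minimal numerical version of the maximum-likelihood step, fitting a single exponent to data truncated to an observation window [x_min, x_max], is sketched below with synthetic data. The paper's methodology additionally merges catalogs with different windows to obtain a global exponent, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_truncated_powerlaw(x, xmin, xmax):
    """MLE of alpha for f(x) proportional to x^(-alpha) restricted to [xmin, xmax]."""
    x = np.asarray(x, dtype=float)
    log_sum = np.sum(np.log(x))
    n = x.size

    def neg_loglik(alpha):
        # Normalization of x^(-alpha) on [xmin, xmax], assuming alpha != 1.
        norm = (xmax**(1.0 - alpha) - xmin**(1.0 - alpha)) / (1.0 - alpha)
        return alpha * log_sum + n * np.log(norm)

    return minimize_scalar(neg_loglik, bounds=(1.01, 5.0), method="bounded").x

# Synthetic truncated power-law sample via inverse-transform sampling, alpha = 1.8.
rng = np.random.default_rng(1)
alpha, xmin, xmax = 1.8, 1.0, 1.0e4
u = rng.uniform(size=20_000)
x = (xmin**(1 - alpha) + u * (xmax**(1 - alpha) - xmin**(1 - alpha)))**(1.0 / (1 - alpha))

print(fit_truncated_powerlaw(x, xmin, xmax))   # close to 1.8
```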
Dynamic power scheduling system for JPEG2000 delivery over wireless networks
NASA Astrophysics Data System (ADS)
Martina, Maurizio; Vacca, Fabrizio
2003-06-01
The diffusion of third-generation mobile terminals is encouraging the development of new multimedia-based applications. The reliable transmission of audiovisual content will gain major interest, being one of the most valuable services. Nevertheless, the mobile scenario is severely power constrained: high compression ratios and refined energy management strategies are highly advisable. JPEG2000 as the source encoding stage assures excellent performance with extremely good visual quality. However, the limited power budget imposes a limit on the computational effort in order to save as much power as possible. Since the wireless environment is error prone, high error-resilience features also need to be employed. This paper tries to investigate the trade-off between quality and power in such a challenging environment.
Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing
van der Velde, Frank
2016-01-01
In situ concept-based computing is based on the notion that conceptual representations in the human brain are “in situ.” In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concepts representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired “blackboards.” The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed BABI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power limited and in situ concept computing. PMID:27242504
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a group is similar to all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
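The efficiency metric used for the comparison, average compute rate divided by average power draw, is simple to reproduce from benchmark logs. A minimal sketch follows; the server names and measurements are hypothetical, not the demonstration's data.

```python
# Hypothetical benchmark log: (name, operations completed, elapsed seconds, average watts)
runs = [
    ("server_A", 1.20e12, 600.0, 310.0),
    ("server_B", 1.18e12, 600.0, 295.0),
]

for name, ops, seconds, watts in runs:
    compute_rate = ops / seconds        # operations per second
    efficiency = compute_rate / watts   # operations per joule
    print(f"{name}: {efficiency:.3e} ops/J")
```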
Kaye, Stephen B
2009-04-01
To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, for correlating with biological treatment variables, and for developing analyses which require a scalar equivalent representation of refractive power.
Wei, Yawei; Venayagamoorthy, Ganesh Kumar
2017-09-01
To prevent a large interconnected power system from a cascading failure, brownout, or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) systems mean that they can only deliver delayed information. The deployment of synchrophasor measurement devices makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for purposes of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two developed CCGNN and CCMLPN systems were then implemented on two different scales of power systems, one of which included a large photovoltaic plant. A real-time power system simulator and a weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results.
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
NASA Astrophysics Data System (ADS)
Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen
2018-04-01
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. Presently, it is very unlikely that the required processing power can be achieved by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, field-programmable gate array, and graphics processor cores constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained on GPUs with gigabytes of external memory and make heavy use of 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques including pruning and weight quantization while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
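The two compression techniques named, magnitude pruning and weight quantization, can be illustrated with a toy numpy sketch. This is a generic illustration of the techniques, not the authors' framework:

```python
import numpy as np

def prune(weights, sparsity=0.9):
    """Magnitude pruning: zero out the smallest |w| until `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights, bits=8):
    """Uniform symmetric quantization of float weights to `bits`-bit integers."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int8)
    return q, scale            # store q and scale; reconstruct as q * scale

w = np.random.randn(1000).astype(np.float32)
w_pruned = prune(w, sparsity=0.9)
q, scale = quantize(w_pruned, bits=8)
print("nonzero fraction:", np.count_nonzero(w_pruned) / w.size,
      "max reconstruction error:", np.max(np.abs(w_pruned - q * scale)))
```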
Ubiquitous Presenter: A Tablet PC-Based System to Support Instructors and Students
ERIC Educational Resources Information Center
Price, Edward; Simon, Beth
2009-01-01
Digital lecturing systems (computer and projector, often with PowerPoint) offer physics instructors the ability to incorporate graphics and the power to share and reuse materials. But these systems do a poor job of supporting interaction in the classroom. For instance, with digital presentation systems, instructors have limited ability to…
Historical Note: The Past Thirty Years in Information Retrieval.
ERIC Educational Resources Information Center
Salton, Gerard
1987-01-01
Briefly reviews early work in documentation and text processing, and predictions that were made about the creative role of computers in information retrieval. An attempt is made to explain why these predictions were not fulfilled and conclusions are drawn regarding the limits of computer power in text retrieval applications. (Author/CLB)
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
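The alternating direction method of multipliers mentioned above splits a coupled problem into sub-problems solved locally with limited information exchange. A toy consensus example, not the paper's OPF formulation: several agents each hold a local quadratic cost and must agree on a shared variable.

```python
import numpy as np

# Toy ADMM consensus: minimize sum_i 0.5*a_i*(x - b_i)^2 over a shared scalar x.
# Each "agent" updates only its local copy x_i; just x_i and the dual u_i are exchanged.
a = np.array([1.0, 2.0, 0.5])
b = np.array([3.0, -1.0, 5.0])
rho = 1.0
x_local = np.zeros(3); u = np.zeros(3); z = 0.0

for _ in range(100):
    # local (agent) step: argmin_x 0.5*a_i*(x - b_i)^2 + rho/2*(x - z + u_i)^2
    x_local = (a * b + rho * (z - u)) / (a + rho)
    z = np.mean(x_local + u)          # central averaging step (e.g. the utility)
    u = u + x_local - z               # dual update

print("ADMM consensus:", z, " closed form:", np.sum(a * b) / np.sum(a))
```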
Creation of Power Reserves Under the Market Economy Conditions
NASA Astrophysics Data System (ADS)
Mahnitko, A.; Gerhards, J.; Lomane, T.; Ribakov, S.
2008-09-01
The main task of the control over an electric power system (EPS) is to ensure reliable power supply at the least cost. In this case, requirements to the electric power quality, power supply reliability and cost limitations on the energy resources must be observed. The available power reserve in an EPS is the necessary condition to keep it in operation with maintenance of normal operating variables (frequency, node voltage, power flows via the transmission lines, etc.). The authors examine possibilities to create power reserves that could be offered for sale by the electric power producer. They consider a procedure of price formation for the power reserves and propose a relevant mathematical model for a united EPS, the initial data being the fuel-cost functions for individual systems, technological limitations on the active power generation and consumers' load. As the criterion of optimization the maximum profit for the producer is taken. The model is exemplified by a concentrated EPS. The computations have been performed using the MATLAB program.
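As a hedged illustration of the kind of optimization described (maximum producer profit from offering reserve, subject to generation limits), here is a toy single-system version with a quadratic fuel-cost function. The cost model, probabilities, and all numbers are assumptions, and this is not the authors' MATLAB model for a united EPS:

```python
from scipy.optimize import minimize_scalar

# Toy reserve-pricing sketch: a producer already generating g can offer reserve r at
# price p; supplying it (with probability call_prob) costs the extra fuel C(g+r) - C(g).
# Quadratic fuel cost C(P) = c2*P^2 + c1*P + c0 and the limit g + r <= P_max are assumed.
c2, c1, c0 = 0.01, 10.0, 50.0
g, P_max, price, call_prob = 60.0, 100.0, 12.0, 0.3

def cost(P):
    return c2 * P**2 + c1 * P + c0

def neg_profit(r):
    return -(price * r - call_prob * (cost(g + r) - cost(g)))

res = minimize_scalar(neg_profit, bounds=(0.0, P_max - g), method="bounded")
print("optimal reserve offer:", res.x, "expected profit:", -res.fun)
```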
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today’s applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved by either more powerful microcontrollers, though with more power consumption or, in general, any solution capable of accelerating task execution. At this point, the use of hardware based, and in particular FPGA solutions, might appear as a candidate technology, since though power use is higher compared with lower power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high performance high capacity state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, as well as a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
Using SRAM based FPGAs for power-aware high performance wireless sensor networks.
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved by either more powerful microcontrollers, though with more power consumption or, in general, any solution capable of accelerating task execution. At this point, the use of hardware based, and in particular FPGA solutions, might appear as a candidate technology, since though power use is higher compared with lower power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high performance high capacity state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, as well as a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor based solutions can be achieved, for instance, when encryption is imposed by the application requirements.
NASA Astrophysics Data System (ADS)
Aharonov, Dorit
In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review sets out to tell the story of theoretical quantum computation. I left out the developing topic of experimental realizations of the model, and neglected other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers, and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I will devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies and finite precision. This question cannot be separated from that of quantum complexity because any realistic model will inevitably be subjected to such inaccuracies. I tried to put all results in their context, asking what the implications are for other issues in computer science and physics. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation on fundamental physical questions such as the transition from quantum to classical physics.
Automation technology for aerospace power management
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1982-01-01
The growing size and complexity of spacecraft power systems coupled with limited space/ground communications necessitate increasingly automated onboard control systems. Research in computer science, particularly artificial intelligence, has developed methods and techniques for constructing man-machine systems with problem-solving expertise in limited domains which may contribute to the automation of power systems. Since these systems perform tasks which are typically performed by human experts they have become known as Expert Systems. A review of the current state of the art in expert systems technology is presented, and potential applications in power systems management are considered. It is concluded that expert systems appear to have significant potential for improving the productivity of operations personnel in aerospace applications, and in automating the control of many aerospace systems.
Implanted Miniaturized Antenna for Brain Computer Interface Applications: Analysis and Design
Zhao, Yujuan; Rennaker, Robert L.; Hutchens, Chris; Ibrahim, Tamer S.
2014-01-01
Implantable Brain Computer Interfaces (BCIs) are designed to provide real-time control signals for prosthetic devices, study brain function, and/or restore sensory information lost as a result of injury or disease. Using Radio Frequency (RF) to wirelessly power a BCI could widely extend the number of applications and increase chronic in-vivo viability. However, due to the limited size and the electromagnetic loss of human brain tissues, implanted miniaturized antennas suffer low radiation efficiency. This work presents simulations, analysis and designs of implanted antennas for a wireless implantable RF-powered brain computer interface application. The results show that thin (on the order of 100 micrometers thickness) biocompatible insulating layers can significantly impact the antenna performance. The proper selection of the dielectric properties of the biocompatible insulating layers and the implantation position inside human brain tissues can facilitate efficient RF power reception by the implanted antenna. While the results show that the effects of the human head shape on implanted antenna performance are somewhat negligible, the constitutive properties of the brain tissues surrounding the implanted antenna can significantly impact the electrical characteristics (input impedance and operational frequency) of the implanted antenna. Three miniaturized antenna designs are simulated and demonstrate that maximum RF power of up to 1.8 milliwatts can be received at 2 GHz when the antenna is implanted around the dura, without violating the Specific Absorption Rate (SAR) limits. PMID:25079941
Menzies, Kevin
2014-08-13
The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
ERIC Educational Resources Information Center
Xu, Q.; Lai, L. L.; Tse, N. C. F.; Ichiyanagi, K.
2011-01-01
An interactive computer-based learning tool with multiple sessions is proposed in this paper, which teaches students to think and helps them recognize the merits and limitations of simulation tools so as to improve their practical abilities in electrical circuit simulation based on the case of a power converter with progressive problems. The…
Control of wind turbine generators connected to power systems
NASA Technical Reports Server (NTRS)
Hwang, H. H.; Mozeico, H. V.; Gilbert, L. J.
1978-01-01
A unique simulation model based on a Mod-0 wind turbine is developed for simulating both speed and power control. An analytical representation for a wind turbine that employs blade pitch angle feedback control is presented, and a mathematical model is formulated. With Mod-0 serving as a practical case study, results of a computer simulation of the model as applied to the problems of synchronization and dynamic stability are provided. It is shown that the speed and output of a wind turbine can be satisfactorily controlled within reasonable limits by employing the existing blade pitch control system under specified conditions. For power control, an additional excitation control is required so that the terminal voltage, output power factor, and armature current can be held within narrow limits. As a result, the variation of torque angle is limited even if speed control is not implemented simultaneously with power control. Design features of the ERDA/NASA 100-kW Mod-0 wind turbine are included.
Program optimizations: The interplay between power, performance, and energy
Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...
2016-05-16
Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
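Loop fusion, one of the optimizations evaluated, merges adjacent loops that traverse the same data so each element is touched once per pass, improving locality. A generic sketch of the transformation, not LULESH code:

```python
import numpy as np

def unfused(a, b):
    # Two separate passes over the arrays: a and b stream through the cache twice.
    c = np.empty_like(a)
    d = np.empty_like(a)
    for i in range(len(a)):
        c[i] = a[i] + b[i]
    for i in range(len(a)):
        d[i] = a[i] * b[i] + c[i]
    return d

def fused(a, b):
    # Fused loop: one pass, both results computed while a[i] and b[i] are still hot.
    d = np.empty_like(a)
    for i in range(len(a)):
        c_i = a[i] + b[i]
        d[i] = a[i] * b[i] + c_i
    return d

a = np.random.rand(1000); b = np.random.rand(1000)
assert np.allclose(unfused(a, b), fused(a, b))
```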
1987-03-01
information and work in a completely secure environment. Information used with today’s C3I systems must be protected. To better understand the role of...and security was of minor concern. The user either worked on his own behalf or as a programmer for someone else. The computer power was limited. With...Although the modules may be of the same classification level, the manager may want to limit each team’s access to the module on which they are working
RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems.
Rodriguez Arreola, Alberto; Balsamo, Domenico; Merrett, Geoff V; Weddell, Alex S
2018-01-10
Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor's state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, flexible middleware which retains the state of multiple external peripherals that are connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, which keeps a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions, imposing a maximum time overhead of 15% when configuring a peripheral. However, this represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches.
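The core idea, logging peripheral configuration writes at run time so they can be replayed after a power interruption, can be sketched generically. The class name and in-memory log below are illustrative, not RESTOP's actual API, which targets microcontroller non-volatile memory:

```python
# Generic sketch of record-and-replay of peripheral configuration writes.
# In a real transient-computing system the log would live in non-volatile memory;
# here a plain dict stands in for it, and `bus_write` stands in for an SPI/I2C driver.

class PeripheralStateKeeper:
    def __init__(self, bus_write):
        self.bus_write = bus_write     # function(register, value) talking to the device
        self.config_log = {}           # last value written to each configuration register

    def write_config(self, register, value):
        self.config_log[register] = value   # record at run time
        self.bus_write(register, value)     # forward to the peripheral

    def restore_after_power_loss(self):
        # Replay the recorded configuration, in register order, after a power cycle.
        for register, value in sorted(self.config_log.items()):
            self.bus_write(register, value)

# Usage with a stand-in bus driver and a hypothetical register address:
keeper = PeripheralStateKeeper(lambda reg, val: print(f"write reg 0x{reg:02X} = 0x{val:02X}"))
keeper.write_config(0x20, 0x57)
keeper.restore_after_power_loss()
```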
Analysis of Application Power and Schedule Composition in a High Performance Computing Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb
As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, and analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
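A hedged sketch of the kind of schedule reordering discussed: rearranging queued jobs with known average power draw so the facility's summed draw stays under a cap. The greedy first-fit rule and the job data are assumptions for illustration, not the paper's method:

```python
# Sketch: greedily place queued jobs into time slots so the summed average power per
# slot stays under a facility cap. Jobs are (name, avg_watts, slots_needed) tuples.

def reorder_schedule(jobs, power_cap, num_slots):
    slot_power = [0.0] * num_slots
    placement = {}
    for name, watts, length in sorted(jobs, key=lambda j: -j[1]):    # biggest draw first
        for start in range(num_slots - length + 1):
            if all(slot_power[t] + watts <= power_cap for t in range(start, start + length)):
                for t in range(start, start + length):
                    slot_power[t] += watts
                placement[name] = start
                break
    return placement, slot_power

jobs = [("cfd", 400.0, 2), ("ml_train", 350.0, 3), ("post", 120.0, 1), ("viz", 90.0, 2)]
placement, profile = reorder_schedule(jobs, power_cap=600.0, num_slots=6)
print(placement, profile)
```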
Power-constrained supercomputing
NASA Astrophysics Data System (ADS)
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
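The LP formulation described, choosing a configuration per computation section to maximize performance under a power bound, can be sketched with scipy.optimize.linprog. The section data, the relaxation to fractional assignment, and the use of a total-energy budget rather than an instantaneous power bound are simplifying assumptions, not the dissertation's exact model:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: for each computation section, pick a (DVFS state, thread count) configuration
# to minimize total run time subject to an energy budget. LP relaxation (fractional
# assignment) and an energy rather than instantaneous-power bound are simplifications.
time_  = np.array([[4.0, 3.0, 2.5],     # seconds: sections (rows) x configurations (cols)
                   [6.0, 4.5, 4.0]])
power_ = np.array([[40.0, 60.0, 90.0],  # watts for the same section/configuration pairs
                   [35.0, 55.0, 85.0]])
energy = time_ * power_                  # joules
n_sec, n_cfg = time_.shape

c = time_.flatten()                                      # objective: total time
A_ub = [energy.flatten()]                                # single energy-budget row
b_ub = [500.0]                                           # joules (assumed budget)
A_eq = np.zeros((n_sec, n_sec * n_cfg))
for s in range(n_sec):
    A_eq[s, s * n_cfg:(s + 1) * n_cfg] = 1.0             # each section fully assigned
b_eq = np.ones(n_sec)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("fractional assignment per section:\n", res.x.reshape(n_sec, n_cfg))
print("minimum total time:", res.fun)
```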
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. High-performance-computing (HPC) techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.
LabVIEW Serial Driver Software for an Electronic Load
NASA Technical Reports Server (NTRS)
Scullin, Vincent; Garcia, Christopher
2003-01-01
A LabVIEW-language computer program enables monitoring and control of a Transistor Devices, Inc., Dynaload WCL232 (or equivalent) electronic load via an RS-232 serial communication link between the electronic load and a remote personal computer. (The electronic load can operate at constant voltage, current, power consumption, or resistance.) The program generates a graphical user interface (GUI) at the computer that looks and acts like the front panel of the electronic load. Once the electronic load has been placed in remote-control mode, this program first queries the electronic load for the present values of all its operational and limit settings, and then drops into a cycle in which it reports the instantaneous voltage, current, and power values in displays that resemble those on the electronic load while monitoring the GUI images of pushbuttons for control actions by the user. By means of the pushbutton images and associated prompts, the user can perform such operations as changing limit values, the operating mode, or the set point. The benefit of this software is that it relieves the user of the need to learn one method for operating the electronic load locally and another method for operating it remotely via a personal computer.
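A hedged Python sketch of the same query-then-monitor pattern using pyserial. The command strings below are placeholders, since the Dynaload WCL232's actual remote-control command set is not reproduced here:

```python
import serial  # pyserial

# Sketch of the driver's loop: open the RS-232 link, read back the instrument's settings,
# then poll readings, as the LabVIEW GUI does. The command strings ("MODE?", "V?", "I?",
# "P?") are placeholders, not the Dynaload WCL232's documented protocol.

def query(port, command):
    port.write((command + "\r\n").encode("ascii"))
    return port.readline().decode("ascii").strip()

with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1.0) as port:
    settings = {cmd: query(port, cmd) for cmd in ("MODE?", "V?", "I?", "P?")}
    print("initial settings:", settings)
    for _ in range(10):                       # monitoring cycle, as in the GUI loop
        volts = query(port, "V?")
        amps = query(port, "I?")
        print(f"V={volts}  I={amps}")
```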
CFD research and systems in Kawasaki Heavy Industries and its future prospects
NASA Astrophysics Data System (ADS)
Hiraoka, Koichi
1990-09-01
The KHI Computational Fluid Dynamics (CFD) system is composed of a VP100 computer and 2-D and 3-D Euler and/or Navier-Stokes (NS) analysis software. For KHI, this system has become a very powerful aerodynamic tool together with the Kawasaki 1 m Transonic Wind Tunnel. The 2-D Euler/NS software, developed in-house, is fully automated, requires no special skill, and was successfully applied to the design of the YXX high-lift devices and the SST supersonic inlet, etc. The 3-D Euler/NS software, developed under joint research with NAL, has an interactively operated Multi-Block type grid generator and can effectively generate grids around complex airplane shapes. Due to the main memory size limitation, 3-D analyses of relatively simple shapes, such as the SST wing-body, were computed in-house on the VP100, whereas more detailed 3-D analyses, such as those of ASUKA and HOPE, were computed on the NAL VP400, which is 10 times more powerful than the VP100, under KHI-NAL joint research. These analysis results have very good correlation with experimental results. However, the present CFD system is less productive than the wind tunnel and has applicability limitations.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces a given model to an equivalent system of differential equations. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
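Symbolic computation of a Gröbner basis, the object named after Buchberger that underpins the 'Bruno force' idea, is available in sympy. A minimal example, unrelated to the authors' biological models or their differential-elimination pipeline:

```python
from sympy import groebner, symbols

# Minimal Gröbner basis computation (illustrative only; not the authors' pipeline,
# which combines differential elimination with model-specific objective functions).
x, y = symbols("x y")
G = groebner([x**2 + y**2 - 1, x - y], x, y, order="lex")
print(G)   # lex basis: a polynomial in y alone appears, so y can be solved first
```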
Computer Aided Drug Design: Success and Limitations.
Baig, Mohammad Hassan; Ahmad, Khurshid; Roy, Sudeep; Ashraf, Jalaluddin Mohammad; Adil, Mohd; Siddiqui, Mohammad Haris; Khan, Saif; Kamal, Mohammad Amjad; Provazník, Ivo; Choi, Inho
2016-01-01
Over the last few decades, computer-aided drug design has emerged as a powerful technique playing a crucial role in the development of new drug molecules. Structure-based drug design and ligand-based drug design are two methods commonly used in computer-aided drug design. In this article, we discuss the theory behind both methods, as well as their successful applications and limitations. To accomplish this, we reviewed structure based and ligand based virtual screening processes. Molecular dynamics simulation, which has become one of the most influential tool for prediction of the conformation of small molecules and changes in their conformation within the biological target, has also been taken into account. Finally, we discuss the principles and concepts of molecular docking, pharmacophores and other methods used in computer-aided drug design.
Distortion outage minimization in Nakagami fading using limited feedback
NASA Astrophysics Data System (ADS)
Wang, Chih-Hong; Dey, Subhrakanti
2011-12-01
We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.
Gambini, R; Pullin, J
2000-12-18
We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism invariant field theory. This theory is the λ → ∞ limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at a quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.
Computing at the speed limit (supercomputers)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernhard, R.
1982-07-01
The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers - about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.
NASA Technical Reports Server (NTRS)
Myers, Roger M.
1991-01-01
In-house magnetoplasmadynamic (MPD) thruster technology is discussed. The study focused on steady-state thrusters at powers of less than 1 MW. Performance measurement and diagnostics technologies were developed for high power thrusters. Also developed was an MPD computer code. The stated goals of the program are to establish: performance and life limitations; influence of applied fields; propellant effects; and scaling laws. The presentation is mostly through graphs and charts.
A note on the self-similar solutions to the spontaneous fragmentation equation
NASA Astrophysics Data System (ADS)
Breschi, Giancarlo; Fontelos, Marco A.
2017-05-01
We provide a method to compute self-similar solutions for various fragmentation equations and use it to compute their asymptotic behaviours. Our procedure is applied to specific cases: (i) the case of mitosis, where fragmentation results into two identical fragments, (ii) fragmentation limited to the formation of sufficiently large fragments, and (iii) processes with fragmentation kernel presenting a power-like behaviour.
2013-01-01
[Figure-list and caption residue; recoverable fragments: "Cracking in asphalt pavement"; "Figure 2. 2D..."; example heterogeneous materials with a metallic binder (figure 1(b)), particulate energetic materials (explosive crystalline grains with polymeric binder, figure 1(c)), and asphalt pavement (stone...); "...explosive HMX grains and at grain-matrix interfaces"; "(d) Cracking in asphalt pavement"; and a truncated note that "(i) it is limited by current computing power (even".]
Modular and Reusable Power System Design for the BRRISON Balloon Telescope
NASA Astrophysics Data System (ADS)
Truesdale, Nicholas A.
High altitude balloons are emerging as low-cost alternatives to orbital satellites in the field of telescopic observation. The near-space environment of balloons allows optics to perform near their diffraction limit. In practice, this implies that a telescope similar to the Hubble Space Telescope could be flown for a cost of tens of millions as opposed to billions. While highly feasible, the design of a balloon telescope to rival Hubble is limited by funding. Until a prototype is proven and more support for balloon science is gained, projects remain limited in both hardware costs and man hours. Thus, to effectively create and support balloon payloads, engineering designs must be efficient, modular, and if possible reusable. This thesis focuses specifically on a modular power system design for the BRRISON comet-observing balloon telescope. Time- and cost-saving techniques are developed that can be used for future missions. A modular design process is achieved through the development of individual circuit elements that span a wide range of capabilities. Circuits for power conversion, switching and sensing are designed to be combined in any configuration. These include DC-DC regulators, MOSFET drivers for switching, isolated switches, current sensors and voltage sensing ADCs. Emphasis is also given to commercially available hardware. Pre-fabricated DC-DC converters and an Arduino microcontroller simplify the design process and offer proven, cost-effective performance. The design of the BRRISON power system is developed from these low-level circuits elements. A board for main power distribution supports the majority of flight electronics, and is extensible to additional hardware in future applications. An ATX computer power supply is developed, allowing the use of a commercial ATX motherboard as the flight computer. The addition of new capabilities is explored in the form of a heater control board. Finally, the power system as a whole is described, and its overall performance analyzed. The success of the BRRISON power system during testing and flight proves its utility, both for BRRISON and for future balloon telescopes.
NASA Technical Reports Server (NTRS)
Webb, J. A., Jr.; Mehmed, O.; Lorenzo, C. F.
1980-01-01
An airflow valve and its electrohydraulic actuation servosystem are described. The servosystem uses a high-power, single-stage servovalve to obtain a dynamic response beyond that of systems designed with conventional two-stage servovalves. The electrohydraulic servosystem is analyzed and the limitations imposed on system performance by such nonlinearities as signal saturations and power limitations are discussed. Descriptions of the mechanical design concepts and developmental considerations are included. Dynamic data, in the form of sweep-frequency test results, are presented and comparison with analytical results obtained with an analog computer model is made.
NASA Astrophysics Data System (ADS)
Miret, Josep M.; Sebé, Francesc
Low-cost devices are the key component of several applications: RFID tags permit an automated supply chain management while smart cards are a secure means of storing cryptographic keys required for remote and secure authentication in e-commerce and e-government applications. These devices must be cheap in order to permit their cost-effective massive manufacturing and deployment. Unfortunately, their low cost limits their computational power. Other devices such as nodes of sensor networks suffer from an additional constraint, namely, their limited battery life. Secure applications designed for these devices cannot make use of classical cryptographic primitives designed for full-fledged computers.
Qin, Zhongyuan; Zhang, Xinshuai; Feng, Kerong; Zhang, Qunfang; Huang, Jie
2014-01-01
With the rapid development and widespread adoption of wireless sensor networks (WSNs), security has become an increasingly prominent problem. How to establish a session key in node communication is a challenging task for WSNs. Considering the limitations in WSNs, such as low computing capacity, small memory, power supply limitations and price, we propose an efficient identity-based key management (IBKM) scheme, which exploits the Bloom filter to authenticate the communication sensor node with storage efficiency. The security analysis shows that IBKM can prevent several attacks effectively with acceptable computation and communication overhead. PMID:25264955
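The Bloom filter used for storage-efficient node authentication can be sketched in a few lines. The sizes and hash construction below are generic illustrations, not the IBKM scheme's parameters:

```python
import hashlib

# Generic Bloom filter sketch: membership test with no false negatives and a tunable
# false-positive rate, using k hash functions derived from SHA-256 with different salts.

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

bf = BloomFilter()
bf.add("node-17-identity")
print("node-17-identity" in bf, "node-99-identity" in bf)   # True, (almost surely) False
```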
NASA Technical Reports Server (NTRS)
Moravec, Hans
1993-01-01
Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.
Quantum and classical dynamics in adiabatic computation
NASA Astrophysics Data System (ADS)
Crowley, P. J. D.; Đurić, T.; Vinci, W.; Warburton, P. A.; Green, A. G.
2014-10-01
Adiabatic transport provides a powerful way to manipulate quantum states. By preparing a system in a readily initialized state and then slowly changing its Hamiltonian, one may achieve quantum states that would otherwise be inaccessible. Moreover, a judicious choice of final Hamiltonian whose ground state encodes the solution to a problem allows adiabatic transport to be used for universal quantum computation. However, the dephasing effects of the environment limit the quantum correlations that an open system can support and degrade the power of such adiabatic computation. We quantify this effect by allowing the system to evolve over a restricted set of quantum states, providing a link between physically inspired classical optimization algorithms and quantum adiabatic optimization. This perspective allows us to develop benchmarks to bound the quantum correlations harnessed by an adiabatic computation. We apply these to the D-Wave Vesuvius machine with revealing—though inconclusive—results.
NASA Astrophysics Data System (ADS)
Moravec, Hans
1993-12-01
Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.
Towards Scalable Graph Computation on Mobile Devices.
Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng
2014-10-01
Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
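The memory-mapping idea, letting the OS page a large edge list in and out of limited RAM on demand, can be sketched with Python's mmap module. The binary edge-list layout is an assumed convention for illustration, not the authors' app:

```python
import mmap
import struct

# Sketch of memory-mapped graph scanning: the edge list is a binary file of (src, dst)
# pairs of 4-byte unsigned ints; the OS pages it in on demand, so the whole graph never
# needs to fit in main memory at once.

EDGE = struct.Struct("<II")

def out_degrees(path):
    degrees = {}
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as buf:
        for offset in range(0, len(buf), EDGE.size):
            src, _dst = EDGE.unpack_from(buf, offset)
            degrees[src] = degrees.get(src, 0) + 1
    return degrees

# Usage (assumes 'edges.bin' was written with the same struct layout):
# print(sorted(out_degrees("edges.bin").items())[:10])
```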
Towards Scalable Graph Computation on Mobile Devices
Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng
2015-01-01
Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems
Rodriguez Arreola, Alberto; Balsamo, Domenico
2018-01-01
Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor’s state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, flexible middleware which retains the state of multiple external peripherals that are connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, which keeps a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions, imposing a maximum time overhead of 15% when configuring a peripheral. However, this represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches. PMID:29320441
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
In this paper, a short-term load forecasting based network reconfiguration approach is proposed and solved in a parallel manner. Specifically, a support vector regression (SVR) based short-term load forecasting approach is designed to provide an accurate load prediction and benefit the network reconfiguration. Because of the nonconvexity of the three-phase balanced optimal power flow, a second-order cone program (SOCP) based approach is used to relax the optimal power flow problem. Then, the alternating direction method of multipliers (ADMM) is used to compute the optimal power flow in a distributed manner. Considering the limited number of switches and the increasing computation capability, the proposed network reconfiguration is solved in a parallel way. The numerical results demonstrate the feasibility and effectiveness of the proposed approach.
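A minimal sketch of SVR-based short-term load forecasting with scikit-learn; the lagged-load features and synthetic daily-cycle data are assumptions standing in for real feeder measurements:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch: predict the next hour's load from the previous 24 hourly loads using SVR.
# The synthetic data and lag-window features are illustrative assumptions.
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

lags = 24
X = np.array([load[i:i + lags] for i in range(load.size - lags)])
y = load[lags:]
split = int(0.8 * len(y))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test MAE:", np.mean(np.abs(pred - y[split:])))
```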
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has created a need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits on its performance, a cluster of computers can, with the proper software, multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom built desktop computers can be arranged in a rack mount situation, gaining the space saving of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost effective.
A malicious pattern detection engine for embedded security systems in the Internet of Things.
Oh, Doohwan; Kim, Deokho; Ro, Won Woo
2014-12-16
With the emergence of the Internet of Things (IoT), a large number of physical objects in daily life have been aggressively connected to the Internet. As the number of objects connected to networks increases, the security systems face a critical challenge due to the global connectivity and accessibility of the IoT. However, it is difficult to adapt traditional security systems to the objects in the IoT, because of their limited computing power and memory size. In light of this, we present a lightweight security system that uses a novel malicious pattern-matching engine. We limit the memory usage of the proposed system in order to make it work on resource-constrained devices. To mitigate performance degradation due to limitations of computation power and memory, we propose two novel techniques, auxiliary shifting and early decision. Through both techniques, we can efficiently reduce the number of matching operations on resource-constrained systems. Experiments and performance analyses show that our proposed system achieves a maximum speedup of 2.14 with an IoT object and provides scalable performance for a large number of patterns.
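The general flavor of memory-conscious multi-pattern matching with an early-decision filter can be sketched generically. The one-byte prefix filter below is a stand-in for, and not an implementation of, the paper's auxiliary shifting and early decision techniques:

```python
# Generic sketch of memory-light multi-pattern matching with an early-decision filter.
# A one-byte prefix table rejects most positions before any full comparison is attempted.

def build_prefix_index(patterns):
    index = {}
    for p in patterns:
        index.setdefault(p[0], []).append(p)
    return index

def scan(payload, prefix_index):
    hits = []
    for i, byte in enumerate(payload):
        for p in prefix_index.get(byte, ()):   # early decision: skip unless prefix matches
            if payload[i:i + len(p)] == p:
                hits.append((i, p))
    return hits

patterns = [b"/etc/passwd", b"<script>", b"\x90\x90\x90\x90"]
payload = b"GET /index.html?q=<script>alert(1)</script> HTTP/1.1"
print(scan(payload, build_prefix_index(patterns)))
```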
Image processing for navigation on a mobile embedded platform
NASA Astrophysics Data System (ADS)
Preuss, Thomas; Gentsch, Lars; Rambow, Mark
2006-02-01
Mobile computing devices such as PDAs or cellular phones may act as "Personal Multimedia Exchanges", but they are limited in their processing power as well as in their connectivity. Sensors as well as cellular phones and PDAs are able to gather multimedia data, e.g. images, but lack the computing power to process that data on their own. Therefore, it is necessary that these devices connect to devices with more performance, which provide, e.g., image processing services. In this paper, a generic approach is presented that connects different kinds of clients with each other and allows them to interact with more powerful devices. This architecture, called BOSPORUS, represents a communication framework for dynamic peer-to-peer computing. Each peer offers and uses services in this network and communicates with the others in a loosely coupled, asynchronous fashion. These features make BOSPORUS a service oriented network architecture (SONA). A mobile embedded system, which uses external services for image processing based on the BOSPORUS framework, is shown as an application of the framework.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
Utilization of recently developed codes for high power Brayton and Rankine cycle power systems
NASA Technical Reports Server (NTRS)
Doherty, Michael P.
1993-01-01
Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and state future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.
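As a hedged illustration of the kind of cycle calculation such codes perform, the textbook ideal (air-standard) Brayton relation fits in a few lines. This is not the NASA codes' model, which handles component-by-component losses and real working-fluid effects:

```python
# Ideal (air-standard) Brayton cycle sketch: thermal efficiency depends only on the
# compressor pressure ratio and the ratio of specific heats.
def brayton_ideal_efficiency(pressure_ratio, gamma=1.4):
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r in (5, 10, 20, 40):
    print(f"pressure ratio {r:2d}: ideal efficiency = {brayton_ideal_efficiency(r):.3f}")
```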
Risk in the Clouds?: Security Issues Facing Government Use of Cloud Computing
NASA Astrophysics Data System (ADS)
Wyld, David C.
Cloud computing is poised to become one of the most important and fundamental shifts in how computing is consumed and used. Forecasts show that government will play a lead role in adopting cloud computing - for data storage, applications, and processing power, as IT executives seek to maximize their returns on limited procurement budgets in these challenging economic times. After an overview of the cloud computing concept, this article explores the security issues facing public sector use of cloud computing and examines the risks and benefits of shifting to cloud-based models. It concludes with an analysis of the challenges that lie ahead for government use of cloud resources.
Computational protein design-the next generation tool to expand synthetic biology applications.
Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel
2018-05-02
One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.
Reliable computation from contextual correlations
NASA Astrophysics Data System (ADS)
Oestereich, André L.; Galvão, Ernesto F.
2017-12-01
An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo 2. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.
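As a rough illustration of the setting described above (not the authors' construction), the following sketch simulates an XOR-limited control computer that queries a two-party box to evaluate the nonlinear AND function; the quantum success probability cos²(π/8) for a Tsirelson-bound CHSH strategy is an assumed input, not derived here.

```python
import math
import random

# Assumption: a CHSH-optimal quantum box satisfies p XOR q = x*y with
# probability cos^2(pi/8) ~ 0.854 on every input pair.
P_SUCCESS = math.cos(math.pi / 8) ** 2

def chsh_box(x, y):
    """Return outputs (p, q) with p^q = x*y holding with probability P_SUCCESS."""
    p = random.randint(0, 1)
    target = x & y
    q = (p ^ target) if random.random() < P_SUCCESS else (p ^ target ^ 1)
    return p, q

def and_with_xor_computer(a, b):
    """XOR-only control computer: feed inputs to the box, XOR the outputs."""
    p, q = chsh_box(a, b)
    return p ^ q          # the only classical post-processing allowed is a parity

trials, ok = 100_000, 0
for _ in range(trials):
    a, b = random.randint(0, 1), random.randint(0, 1)
    ok += (and_with_xor_computer(a, b) == (a & b))
print(f"empirical AND success rate: {ok / trials:.3f}  (best noncontextual box: 0.75)")
```

The printed rate exceeds the 3/4 bound attainable with noncontextual boxes, which is the sense in which contextual correlations lift a parity-limited computer above chance on a nonlinear function.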
Solution techniques for transient stability-constrained optimal power flow – Part II
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu; ...
2017-06-28
Transient stability-constrained optimal power flow is an important emerging problem with power systems pushed to the limits for economic benefits, dense and larger interconnected systems, and reduced inertia due to expected proliferation of renewable energy resources. In this study, two more approaches, single machine equivalent and computational intelligence, are presented. Also discussed are various application areas and future directions in this research area. In conclusion, a comprehensive resource for the available literature, publicly available test systems, and relevant numerical libraries is provided.
Solution techniques for transient stability-constrained optimal power flow – Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu
Transient stability-constrained optimal power flow is an important emerging problem with power systems pushed to the limits for economic benefits, dense and larger interconnected systems, and reduced inertia due to expected proliferation of renewable energy resources. In this study, two more approaches, single machine equivalent and computational intelligence, are presented. Also discussed are various application areas and future directions in this research area. In conclusion, a comprehensive resource for the available literature, publicly available test systems, and relevant numerical libraries is provided.
A computer controlled power tool for the servicing of the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Richards, Paul W.; Konkel, Carl; Smith, Chris; Brown, Lee; Wagner, Ken
1996-01-01
The Hubble Space Telescope (HST) Pistol Grip Tool (PGT) is a self-contained, microprocessor controlled, battery-powered, 3/8-inch-drive hand-held tool. The PGT is also a non-powered ratchet wrench. This tool will be used by astronauts during Extravehicular Activity (EVA) to apply torque to the HST and HST Servicing Support Equipment mechanical interfaces and fasteners. Numerous torque, speed, and turn or angle limits are programmed into the PGT for use during various missions. Batteries are replaceable during ground operations, Intravehicular Activities, and EVAs.
NASA Astrophysics Data System (ADS)
Kashansky, Vladislav V.; Kaftannikov, Igor L.
2018-02-01
Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our own experience and findings on combining the power of SLURM, BOINC and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.
1999-01-01
We are on the path to meet the major challenges ahead for TCAD (technology computer aided design). The emerging computational grid will ultimately solve the challenge of limited computational power. The Modular TCAD Framework will solve the TCAD software challenge once TCAD software developers realize that there is no other way to meet industry's needs. The modular TCAD framework (MTF) also provides the ideal platform for solving the TCAD model challenge by rapid implementation of models in a partial differential solver.
Additive Manufacturing of a Microbial Fuel Cell—A detailed study
Calignano, Flaviana; Tommasi, Tonia; Manfredi, Diego; Chiolerio, Alessandro
2015-01-01
In contemporary society we observe an everlasting permeation of electron devices, smartphones, portable computing tools. The tiniest living organisms on Earth could become the key to address this challenge: energy generation by bacterial processes from renewable stocks/waste through devices such as microbial fuel cells (MFCs). However, the application of this solution was limited by a moderately low efficiency. We explored the limits, if any, of additive manufacturing (AM) technology to fabricate a fully AM-based powering device, exploiting low density, open porosities able to host the microbes, systems easy to fuel continuously and to run safely. We obtained an optimal energy recovery close to 3 kWh m−3 per day that can power sensors and low-power appliances, allowing data processing and transmission from remote/harsh environments. PMID:26611142
Additive Manufacturing of a Microbial Fuel Cell—A detailed study
NASA Astrophysics Data System (ADS)
Calignano, Flaviana; Tommasi, Tonia; Manfredi, Diego; Chiolerio, Alessandro
2015-11-01
In contemporary society we observe an everlasting permeation of electron devices, smartphones, portable computing tools. The tiniest living organisms on Earth could become the key to address this challenge: energy generation by bacterial processes from renewable stocks/waste through devices such as microbial fuel cells (MFCs). However, the application of this solution was limited by a moderately low efficiency. We explored the limits, if any, of additive manufacturing (AM) technology to fabricate a fully AM-based powering device, exploiting low density, open porosities able to host the microbes, systems easy to fuel continuously and to run safely. We obtained an optimal energy recovery close to 3 kWh m-3 per day that can power sensors and low-power appliances, allowing data processing and transmission from remote/harsh environments.
NASA Astrophysics Data System (ADS)
Li, Peng; Olmi, Claudio; Song, Gangbing
2010-04-01
Piezoceramic-based transducers are widely researched and used for structural health monitoring (SHM) systems due to the piezoceramic material's inherent advantage of dual sensing and actuation. Piezoceramic-based SHM systems benefit from wireless sensor network (WSN) technology, which allows easy and flexible installation, low system cost, and increased robustness over wired systems. However, piezoceramic wireless SHM systems still face some drawbacks: piezoceramic-based SHM requires relatively high computational capability to calculate damage information, whereas battery-powered WSN sensor nodes have strict power consumption limitations and hence limited computational power. In addition, commonly used centralized processing networks require wireless sensors to transmit all data back to the network coordinator for analysis. This signal processing procedure can be problematic for piezoceramic-based SHM applications, as it is neither energy efficient nor robust. In this paper, we aim to solve these problems with a distributed wireless sensor network for piezoceramic-based structural health monitoring systems. Three important issues, namely the power system, waking up from sleep on impact detection, and local data processing, are addressed to reach optimized energy efficiency. Instead of the swept-sine excitation used in earlier research, several sine frequencies were used in sequence to excite the concrete structure. The wireless sensors record the sine excitations and compute the time-domain energy for each sine frequency locally to detect the energy change. By comparing the data of the damaged concrete frame with the healthy data, we are able to determine the damage information of the concrete frame. A relatively powerful wireless microcontroller was used to carry out the sampling and distributed data processing in real time. The distributed wireless network dramatically reduced the data transmission between the wireless sensors and the wireless coordinator, which in turn reduced the power consumption of the overall system.
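A minimal sketch of the kind of on-node processing described above, assuming the sensor simply sums squared samples over each excitation record and compares against a stored healthy baseline; the frequencies, signals, and threshold are illustrative, not taken from the paper.

```python
import numpy as np

def band_energy(samples):
    """Time-domain energy of one excitation record: sum of squared samples."""
    x = np.asarray(samples, dtype=float)
    return float(np.sum(x * x))

def damage_indicator(records, healthy_energy, threshold=0.2):
    """Compare the energy at each excitation frequency with a healthy baseline.

    records        -- dict {frequency_hz: sampled response array}
    healthy_energy -- dict {frequency_hz: baseline energy from the intact frame}
    threshold      -- relative energy change treated as damage (illustrative)
    """
    report = {}
    for f, x in records.items():
        e = band_energy(x)
        rel_change = abs(e - healthy_energy[f]) / healthy_energy[f]
        report[f] = (e, rel_change, rel_change > threshold)
    return report

# Only this small report, not the raw samples, would be radioed back to the coordinator.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
healthy = {50: band_energy(np.sin(2 * np.pi * 50 * t))}
measured = {50: 0.7 * np.sin(2 * np.pi * 50 * t)}   # attenuated response of a damaged frame
print(damage_indicator(measured, healthy))
```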
BINGO: a code for the efficient computation of the scalar bi-spectrum
NASA Astrophysics Data System (ADS)
Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme
2013-05-01
We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors. Focusing firstly on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter fNL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion on the implications of the results we obtain.
2012-05-01
thermal energy storage system using molten silicon as a phase change material. A cylindrical receiver, absorber, converter system was evaluated using ... temperature operation. This work computationally evaluates a thermal energy storage system using molten silicon as a phase change material. A cylindrical ... salts) offering a low power density and a low thermal conductivity, leading to a limited rate of charging and discharging (4). A focus on
Rational calculation accuracy in acousto-optical matrix-vector processor
NASA Astrophysics Data System (ADS)
Oparin, V. V.; Tigin, Dmitry V.
1994-01-01
The high speed of parallel computations in a comparatively small-size processor and the acceptable power consumption make the use of an acousto-optic matrix-vector multiplier (AOMVM) attractive for processing large amounts of information in real time. The limited accuracy of the computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.
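To illustrate the accuracy trade-off mentioned above, the sketch below compares an exact matrix-vector product with one computed from coarsely quantized operands, roughly emulating a reduced-accuracy analog multiplier; the matrix size and bit widths are arbitrary assumptions.

```python
import numpy as np

def quantize(a, bits):
    """Uniformly quantize an array to the given number of bits over its own range."""
    levels = 2 ** bits - 1
    lo, hi = a.min(), a.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((a - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, (64, 64))
x = rng.uniform(-1, 1, 64)

exact = A @ x
approx = quantize(A, 6) @ quantize(x, 6)     # 6-bit operands (illustrative choice)

rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error with 6-bit operands: {rel_err:.3%}")
```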
Engineering incremental resistive switching in TaOx based memristors for brain-inspired computing
NASA Astrophysics Data System (ADS)
Wang, Zongwei; Yin, Minghui; Zhang, Teng; Cai, Yimao; Wang, Yangyuan; Yang, Yuchao; Huang, Ru
2016-07-01
Brain-inspired neuromorphic computing is expected to revolutionize the architecture of conventional digital computers and lead to a new generation of powerful computing paradigms, where memristors with analog resistive switching are considered to be potential solutions for synapses. Here we propose and demonstrate a novel approach to engineering the analog switching linearity in TaOx based memristors, that is, by homogenizing the filament growth/dissolution rate via the introduction of an ion diffusion limiting layer (DLL) at the TiN/TaOx interface. This has effectively mitigated the commonly observed two-regime conductance modulation behavior and led to more uniform filament growth (dissolution) dynamics with time, therefore significantly improving the conductance modulation linearity that is desirable in neuromorphic systems. In addition, the introduction of the DLL also served to reduce the power consumption of the memristor, and important synaptic learning rules in biological brains such as spike timing dependent plasticity were successfully implemented using these optimized devices. This study could provide general implications for continued optimizations of memristor performance for neuromorphic applications, by carefully tuning the dynamics involved in filament growth and dissolution. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00476h
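The following sketch is only a phenomenological illustration, not the device model from the paper: it contrasts a conductance update whose step size shrinks as the filament grows, giving the familiar two-regime curve, with a rate-limited update standing in for a diffusion-limiting layer that keeps the steps more uniform. All constants are made up.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4          # conductance window in siemens (illustrative)
PULSES = 100

def potentiate(rate_limited, alpha=5.0, step=1.5e-6):
    """Apply identical SET pulses and record the conductance after each one."""
    g, trace = G_MIN, []
    for _ in range(PULSES):
        if rate_limited:
            dg = step                                                   # DLL-like, uniform growth
        else:
            dg = 3.0 * step * np.exp(-alpha * (g - G_MIN) / (G_MAX - G_MIN))  # fast-then-slow
        g = min(G_MAX, g + dg)
        trace.append(g)
    return np.array(trace)

def nonlinearity(trace):
    """Maximum deviation from a straight line between the endpoints, normalized."""
    ideal = np.linspace(trace[0], trace[-1], len(trace))
    return float(np.max(np.abs(trace - ideal)) / (trace[-1] - trace[0]))

print(f"nonlinearity without rate limiting: {nonlinearity(potentiate(False)):.2f}")
print(f"nonlinearity with rate limiting:    {nonlinearity(potentiate(True)):.2f}")
```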
‘My Virtual Dream’: Collective Neurofeedback in an Immersive Art Environment
Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal
2015-01-01
While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions. PMID:26154513
An exploration of neuromorphic systems and related design issues/challenges in dark silicon era
NASA Astrophysics Data System (ADS)
Chandaliya, Mudit; Chaturvedi, Nitin; Gurunarayanan, S.
2018-03-01
Current microprocessors have shown remarkable performance and memory capacity improvements since their introduction. However, due to power and thermal limitations, only a fraction of cores can operate at full frequency at any instant of time, irrespective of the advantages of each new technology generation. This phenomenon of microprocessor under-utilization is called dark silicon, and it constrains further innovation in computing. To overcome the utilization-wall limitation, IBM explored and developed neurosynaptic system chips. This has opened a wide scope of research in the fields of innovative computing, technology, materials science, machine learning, etc. In this paper, we first review the diverse stages of research that have been influential in the innovation of neurosynaptic architectures. These architectures focus on the development of a brain-like framework that is efficient enough to execute a broad set of computations in real time while keeping ultra-low power consumption and area considerations in mind. We also discuss the challenges and opportunities of designing neuromorphic systems with existing technologies in the dark silicon era, which constitute a key area of future research.
Development of hybrid computer plasma models for different pressure regimes
NASA Astrophysics Data System (ADS)
Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf
2016-09-01
With the increased performance of contemporary computers over the last decades, numerical simulations have become a very powerful tool, applicable also in plasma physics research. Plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid computer plasma models give results with only limited accuracy. On the other hand, much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques that combine the advantages of both approaches mentioned above, particularly their so-called iterative version. The study is focused on the mutual relations between fluid and particle models, which are demonstrated on calculations of the sheath structure of low temperature argon plasma near a cylindrical Langmuir probe at medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).
Borresen, Jon; Lynch, Stephen
2012-01-01
In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria
2017-03-01
An accurate battery model is the prerequisite for reliable state estimation of the vanadium redox battery (VRB). As the battery model parameters are time varying with operating condition variation and battery aging, the common methods where model parameters are empirical or prescribed offline lack accuracy and robustness. To address this issue, this paper proposes to use an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both the recursive least squares (RLS) and the extended Kalman filter (EKF). Performance comparison shows that the RLS is superior with respect to modeling accuracy, convergence properties, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator which incorporates the constraints of the voltage limit, SOC limit and design limit of current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of prediction time horizon selection on the peak power is also analyzed.
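A minimal sketch of recursive least squares with exponential forgetting, applied to a generic first-order equivalent-circuit regressor; the model structure, forgetting factor, and toy measurements are assumptions for illustration and do not reproduce the paper's VRB model.

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting."""
    def __init__(self, n_params, forgetting=0.98, p0=1e3):
        self.theta = np.zeros(n_params)          # parameter estimate
        self.P = np.eye(n_params) * p0           # covariance matrix
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        err = y - phi @ self.theta                           # prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Illustrative regressor for a first-order model:
#   v_k = a * v_{k-1} + b * i_k + c * i_{k-1}   (a, b, c are the unknowns)
rls = RLS(n_params=3)
v_prev, i_prev = 1.35, 0.0
for v_k, i_k in [(1.36, 2.0), (1.37, 2.0), (1.37, 1.5)]:    # toy voltage/current samples
    theta = rls.update([v_prev, i_k, i_prev], v_k)
    v_prev, i_prev = v_k, i_k
print("identified [a, b, c]:", theta)
```

The forgetting factor below 1 is what lets the estimate track slow parameter drift from aging and operating-condition changes.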
Upper Body-Based Power Wheelchair Control Interface for Individuals With Tetraplegia.
Thorp, Elias B; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica P; Pierella, Camilla; Roth, Elliot J; Seanez Gonzalez, Ismael; Mussa-Ivaldi, Ferdinando A
2016-02-01
Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users a limited vocabulary of commands and often do not take advantage of the user's residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control a power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five driving sessions, were able to achieve smoothness that was similar to the smoothness with their current joystick. All participants were twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize from training on controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control.
Robotic insects: Manufacturing, actuation, and power considerations
NASA Astrophysics Data System (ADS)
Wood, Robert
2015-12-01
As the characteristic size of a flying robot decreases, the challenges for successful flight revert to basic questions of fabrication, actuation, fluid mechanics, stabilization, and power - whereas such questions have in general been answered for larger aircraft. When developing a robot on the scale of a housefly, all hardware must be developed from scratch as there is nothing "off-the-shelf" which can be used for mechanisms, sensors, or computation that would satisfy the extreme mass and power limitations. With these challenges in mind, this talk will present progress in the essential technologies for insect-like robots with an emphasis on multi-scale manufacturing methods, high power density actuation, and energy-efficient power distribution.
Perspectives on the Future of CFD
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2000-01-01
This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which in the past has pioneered the field of flow simulation. Over time, CFD has progressed along with computing power. Numerical methods have advanced as CPU and memory capacity have increased. Complex configurations are routinely computed now, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources have shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.
Application of computational aero-acoustics to real world problems
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The application of computational aeroacoustics (CAA) to real-world problems is discussed in relation to the analyses performed, with the aim of assessing the various techniques. The applications are limited by the inability of available computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. Problems remain to be solved regarding the efficient use of the power of parallel computers and the development of turbulence modeling schemes. The goal of CAA is stated as the implementation of acoustic design studies on a computer terminal with reasonable run times.
A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model
USDA-ARS?s Scientific Manuscript database
Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...
Convolutional networks for vehicle track segmentation
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2017-10-01
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple and fast models to label track pixels. These models, however, are unable to capture natural track features, such as continuity and parallelism. More powerful but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3×3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate with low power budgets and limited training data. As a result, we aim for small and efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our six-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
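A small sketch of a dilated, fully convolutional segmentation network in the spirit described above; the channel widths, dilation rates, and training snippet are guesses for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DilatedTrackNet(nn.Module):
    """Six 3x3 convolution layers with growing dilation; outputs a per-pixel track logit."""
    def __init__(self, channels=16):
        super().__init__()
        layers, in_ch = [], 1
        for dilation in (1, 1, 2, 4, 8, 1):          # assumed dilation schedule
            layers += [nn.Conv2d(in_ch, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation),
                       nn.ReLU(inplace=True)]
            in_ch = channels
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Conv2d(channels, 1, kernel_size=1)   # per-pixel score

    def forward(self, x):
        return self.classifier(self.features(x))

# End-to-end training step on a toy single-channel image and binary track mask.
net = DilatedTrackNet()
image = torch.randn(1, 1, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.95).float()
loss = nn.functional.binary_cross_entropy_with_logits(net(image), mask)
loss.backward()
print("trainable parameters:", sum(p.numel() for p in net.parameters()))
```

Keeping every layer a 3x3 convolution with padding equal to its dilation preserves the image size while growing the receptive field, which is how a network this small can still capture continuity and parallelism of tracks.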
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
NASA Astrophysics Data System (ADS)
Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim
2011-03-01
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
Unified Performance and Power Modeling of Scientific Workloads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shuaiwen; Barker, Kevin J.; Kerbyson, Darren J.
2013-11-17
It is expected that scientific applications executing on future large-scale HPC must be optimized not only in terms of performance, but also in terms of power consumption. As power and energy become increasingly constrained resources, researchers and developers must have access to tools that will allow for accurate prediction of both performance and power consumption. Reasoning about performance and power consumption in concert will be critical for achieving maximum utilization of limited resources on future HPC systems. To this end, we present a unified performance and power model for the Nek-Bone mini-application developed as part of the DOE's CESAR Exascale Co-Design Center. Our models consider the impact of computation, point-to-point communication, and collective communication.
Dávid-Barrett, T.; Dunbar, R. I. M.
2013-01-01
Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623
Self-Directed Cooperative Planetary Rovers
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo; Morris, Robert (Technical Monitor)
2003-01-01
The project is concerned with the development of decision-theoretic techniques to optimize the scientific return of planetary rovers. Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We have developed a comprehensive solution to this problem that involves high-level tools to describe a mission; a compiler that maps a mission description and additional probabilistic models of the components of the rover into a Markov decision problem; and algorithms for solving the rover control problem that are sensitive to the limited computational resources and high-level of uncertainty in this domain.
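A toy sketch of the kind of Markov decision problem such a compiler might produce, solved here with plain value iteration; the states, actions, rewards, and transition probabilities are invented for illustration only and do not come from the project.

```python
import numpy as np

# Toy rover MDP: states are battery levels, actions are "experiment" or "recharge".
states = ["low", "medium", "high"]
actions = ["experiment", "recharge"]

# P[a][s, s'] = transition probability; R[a][s] = expected immediate science reward.
P = {
    "experiment": np.array([[1.0, 0.0, 0.0],     # low battery: experiments drain and fail
                            [0.7, 0.3, 0.0],
                            [0.1, 0.7, 0.2]]),
    "recharge":   np.array([[0.2, 0.8, 0.0],
                            [0.0, 0.3, 0.7],
                            [0.0, 0.0, 1.0]]),
}
R = {"experiment": np.array([0.0, 1.0, 2.0]),
     "recharge":   np.array([0.0, 0.0, 0.0])}

def value_iteration(gamma=0.95, tol=1e-8):
    v = np.zeros(len(states))
    while True:
        q = np.stack([R[a] + gamma * P[a] @ v for a in actions])
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, [actions[i] for i in q.argmax(axis=0)]
        v = v_new

values, policy = value_iteration()
print(dict(zip(states, policy)))
```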
Security and Cloud Outsourcing Framework for Economic Dispatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform those of the in-house infrastructure.
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2012-04-01
By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without underflow problems. The new method enables us to evaluate ALFs of extremely high degree, such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedure of floating point numbers with the exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation realizes meaningful execution of spherical harmonic synthesis and/or analysis of arbitrary degree and order.
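A rough sketch of the exponent-extension idea, representing each working variable as a pair (x, i) meaning x * B^i with a power-of-two radix B = 2^960; the radix and thresholds follow the general recipe described above, but the exact constants and arithmetic routines of the reference implementation may differ.

```python
# Extended-exponent arithmetic sketch: value = x * BIG**i.
BIG   = 2.0 ** 960      # radix (a large power of two, assumed here)
BIGI  = 2.0 ** -960
BIGS  = 2.0 ** 480      # upper normalization threshold
BIGSI = 2.0 ** -480     # lower normalization threshold

def xnorm(x, i):
    """Keep the mantissa part inside [BIGSI, BIGS) so plain doubles never underflow."""
    if x == 0.0:
        return 0.0, 0
    ax = abs(x)
    if ax >= BIGS:
        return x * BIGI, i + 1
    if ax < BIGSI:
        return x * BIG, i - 1
    return x, i

def xmul(xf, x, i):
    """Multiply an extended-exponent number (x, i) by an ordinary double xf, then renormalize."""
    return xnorm(xf * x, i)

def to_float(x, i):
    """Convert back to an ordinary double (underflows to 0.0 for very negative i)."""
    return x * (BIG ** i)

# A product of many tiny factors that would underflow in plain double precision:
x, i = 1.0, 0
for _ in range(50):
    x, i = xmul(1e-20, x, i)          # plain doubles would underflow near 1e-308
print("extended-exponent representation:", x, i)
```

Only the few recursion variables need this representation; the final values that are actually of interest fall back into the normal double range and can be converted with `to_float`.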
Security and Cloud Outsourcing Framework for Economic Dispatch
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi; ...
2017-04-24
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform those of the in-house infrastructure.
NASA Astrophysics Data System (ADS)
Verma, H. K.; Mafidar, P.
2013-09-01
In view of growing environmental concerns, power system engineers are required to generate quality green energy. Hence, economic dispatch (ED) aims to schedule power generation to meet the load demand at minimum fuel cost, subject to environmental and voltage constraints along with the essential constraints on real and reactive power. Emission control, which reduces the negative impact on the environment, is achieved by including additional constraints in the ED problem. Presently, power systems mostly operate near their stability limits; therefore, with increased demand the system faces voltage problems. In the present work, bus voltages are brought within limits by placing a static var compensator (SVC) at the weak bus, which is identified from the bus participation factors. The optimal size of the SVC is determined by a univariate search method. This paper presents the use of the Teaching Learning Based Optimization (TLBO) algorithm for the voltage-stable, environment-friendly ED problem with real and reactive power constraints. The computational effectiveness of TLBO is established through test results against particle swarm optimization (PSO) and Big Bang-Big Crunch (BB-BC) algorithms for the ED problem.
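A compact sketch of the teacher and learner phases of TLBO applied to a toy economic dispatch with quadratic fuel-cost curves and a power-balance penalty; the cost coefficients, generator limits, demand, and penalty weight are made-up illustrative values, not the paper's test system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-unit dispatch: cost_i(P) = a_i + b_i*P + c_i*P^2, total output must meet demand.
a, b, c = np.array([100., 120., 80.]), np.array([2.0, 1.8, 2.2]), np.array([0.01, 0.012, 0.008])
p_min, p_max, demand = np.array([50., 50., 50.]), np.array([300., 300., 300.]), 450.0

def cost(P):
    fuel = np.sum(a + b * P + c * P ** 2)
    return fuel + 1e4 * abs(np.sum(P) - demand)        # power-balance penalty

def tlbo(pop_size=20, iters=200):
    pop = rng.uniform(p_min, p_max, size=(pop_size, 3))
    f = np.array([cost(x) for x in pop])
    for _ in range(iters):
        # Teacher phase: move every learner toward the best solution.
        teacher, mean = pop[f.argmin()], pop.mean(axis=0)
        tf = rng.integers(1, 3)                          # teaching factor in {1, 2}
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(3) * (teacher - tf * mean), p_min, p_max)
            if cost(cand) < f[i]:
                pop[i], f[i] = cand, cost(cand)
        # Learner phase: pairwise interaction with a random classmate.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = pop[i] - pop[j] if f[i] < f[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(3) * step, p_min, p_max)
            if cost(cand) < f[i]:
                pop[i], f[i] = cand, cost(cand)
    return pop[f.argmin()], f.min()

best, best_cost = tlbo()
print("dispatch (MW):", best.round(1), " cost:", round(best_cost, 1))
```

Unlike PSO or BB-BC, TLBO has no algorithm-specific tuning parameters beyond the population size and iteration count, which is part of its appeal for dispatch problems.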
A Malicious Pattern Detection Engine for Embedded Security Systems in the Internet of Things
Oh, Doohwan; Kim, Deokho; Ro, Won Woo
2014-01-01
With the emergence of the Internet of Things (IoT), a large number of physical objects in daily life have been aggressively connected to the Internet. As the number of objects connected to networks increases, the security systems face a critical challenge due to the global connectivity and accessibility of the IoT. However, it is difficult to adapt traditional security systems to the objects in the IoT, because of their limited computing power and memory size. In light of this, we present a lightweight security system that uses a novel malicious pattern-matching engine. We limit the memory usage of the proposed system in order to make it work on resource-constrained devices. To mitigate performance degradation due to limitations of computation power and memory, we propose two novel techniques, auxiliary shifting and early decision. Through both techniques, we can efficiently reduce the number of matching operations on resource-constrained systems. Experiments and performance analyses show that our proposed system achieves a maximum speedup of 2.14 with an IoT object and provides scalable performance for a large number of patterns. PMID:25521382
Traditional Tracking with Kalman Filter on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; MacNeill, Ian; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2015-05-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The most common track finding techniques in use today are however those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
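For reference, a bare-bones Kalman filter predict/update step for a one-dimensional constant-velocity track state of the kind used conceptually in track fitting; the state layout, noise values, and toy hits are illustrative and not taken from any experiment's software.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.05):
    """One predict+update for state x = [position, slope], measured by a position z."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])               # propagation between detector layers
    H = np.array([[1.0, 0.0]])               # only the position is measured
    Q = q * np.eye(2)                         # process noise (multiple-scattering-like)
    R = np.array([[r]])                       # measurement noise

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for hit in [0.11, 0.20, 0.32, 0.41]:          # toy hit positions, one per layer
    x, P = kalman_step(x, P, hit)
print("fitted position/slope:", x.round(3))
```

Each candidate track repeats this small dense-linear-algebra kernel many times, which is exactly the kind of work that vector units and many lightweight cores can absorb if the candidates are processed in parallel.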
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...
2017-08-17
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep the line flows within limits. A Remedial Action Scheme (RAS) offers a quick control mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system to support RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE-118 Bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep the line flows within limits. A Remedial Action Scheme (RAS) offers a quick control mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system to support RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE-118 Bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu
Transient stability-constrained optimal power flow is an important emerging problem with power systems pushed to the limits for economic benefits, dense and larger interconnected systems, and reduced inertia due to expected proliferation of renewable energy resources. In this study, two more approaches, single machine equivalent and computational intelligence, are presented. Also discussed are various application areas and future directions in this research area. In conclusion, a comprehensive resource for the available literature, publicly available test systems, and relevant numerical libraries is provided.
Essential Use Cases for Pedagogical Patterns
ERIC Educational Resources Information Center
Derntl, Michael; Botturi, Luca
2006-01-01
Coming from architecture, through computer science, pattern-based design spread into other disciplines and is nowadays recognized as a powerful way of capturing and reusing effective design practice. However, current pedagogical pattern approaches lack widespread adoption, both by users and authors, and are still limited to individual initiatives.…
USDA-ARS?s Scientific Manuscript database
Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loef, P.A.; Smed, T.; Andersson, G.
The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
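A sketch of one standard way to estimate the minimum singular value and its singular vectors from a sparse Jacobian using only a single LU factorization and triangular solves (inverse iteration on J^T J); this conveys the general idea of exploiting sparsity, not the specific algorithm of the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def min_singular(J, iters=50, seed=0):
    """Approximate the smallest singular value and right singular vector of sparse J."""
    n = J.shape[0]
    lu = spla.splu(J.tocsc())                 # one sparse LU factorization, reused throughout
    u = np.random.default_rng(seed).standard_normal(n)
    u /= np.linalg.norm(u)
    for _ in range(iters):
        w = lu.solve(u, trans='T')            # w = J^{-T} u
        v = lu.solve(w)                       # v = J^{-1} w, i.e. one power step on (J^T J)^{-1}
        u = v / np.linalg.norm(v)
    sigma = 1.0 / np.sqrt(np.linalg.norm(v))  # since ||v|| converges to 1 / sigma_min^2
    return sigma, u                            # u approximates the right singular vector

# Quick check on a small random sparse stand-in for a Jacobian.
J = sp.random(200, 200, density=0.05, random_state=3) + 5 * sp.eye(200)
sigma, _ = min_singular(J.tocsr())
print("estimated sigma_min:", sigma, " exact:", np.linalg.svd(J.toarray())[1].min())
```

Because only back-substitutions with the already factorized sparse matrix are needed per iteration, both the computation time and the memory footprint stay close to those of an ordinary power flow solution.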
Dense and Sparse Matrix Operations on the Cell Processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel W.; Shalf, John; Oliker, Leonid
2005-05-01
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance when compared with the Intel Itanium2 and Cray X1 processors.
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]
2011-08-16
An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
Deep Learning in Medical Imaging: General Overview
Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae
2017-01-01
The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging. PMID:28670152
Deep Learning in Medical Imaging: General Overview.
Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug
2017-01-01
The artificial neural network (ANN)-a machine learning technique inspired by the human neuronal synapse system-was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.
A Real Time Controller For Applications In Smart Structures
NASA Astrophysics Data System (ADS)
Ahrens, Christian P.; Claus, Richard O.
1990-02-01
Research in smart structures, especially in the area of vibration suppression, has warranted the investigation of advanced computing environments. The limited real-time computing power of PCs has restricted the development of high-order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of intelligent structure monitoring by way of modal domain sensing for vibration control. It is compared to a PC AT based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of 15 million instructions per second (MIPS) continuous performance and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high speed operation and short development cycles. The system allows for the implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.
Nonlinear power flow feedback control for improved stability and performance of airfoil sections
Wilson, David G.; Robinett, III, Rush D.
2013-09-03
A computer-implemented method of determining the pitch stability of an airfoil system, comprising using a computer to numerically integrate a differential equation of motion that includes terms describing PID controller action. In one model, the differential equation characterizes the time-dependent response of the airfoil's pitch angle, .alpha.. The computer model calculates limit-cycles of the model, which represent the stability boundaries of the airfoil system. Once the stability boundary is known, feedback control can be implemented, by using, for example, a PID controller to control a feedback actuator. The method allows the PID controller gain constants, K.sub.I, K.sub.p, and K.sub.d, to be optimized. This permits operation closer to the stability boundaries, while preventing the physical apparatus from unintentionally crossing the stability boundaries. Operating closer to the stability boundaries permits greater power efficiencies to be extracted from the airfoil system.
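A generic illustration of numerically integrating a second-order pitch equation with a PID feedback torque, in the spirit of the method described above; the structural coefficients, nonlinear aerodynamic moment, and gains below are placeholders invented for this sketch, not the patented model or its constants.

```python
import numpy as np

# Illustrative pitch dynamics:  I*alpha'' + c*alpha' + k*alpha = M_aero(alpha) + u_pid
I, c, k = 1.0, 0.05, 4.0

def aero_moment(alpha):
    return 0.3 * alpha - 2.0 * alpha ** 3         # soft nonlinearity (placeholder)

def simulate(Kp, Ki, Kd, alpha0=0.2, dt=1e-3, t_end=20.0, setpoint=0.0):
    alpha, omega, integ = alpha0, 0.0, 0.0
    history = []
    for _ in range(int(t_end / dt)):
        err = setpoint - alpha
        integ += err * dt
        u = Kp * err + Ki * integ + Kd * (-omega)  # PID feedback torque
        domega = (aero_moment(alpha) - c * omega - k * alpha + u) / I
        omega += domega * dt                       # explicit Euler step
        alpha += omega * dt
        history.append(alpha)
    return np.array(history)

# Without feedback the lightly damped pitch oscillation persists; with PID it settles.
open_loop = simulate(0.0, 0.0, 0.0)
closed_loop = simulate(Kp=3.0, Ki=0.5, Kd=1.0)
print("late-time |alpha|, open loop :", np.abs(open_loop[-1000:]).max().round(3))
print("late-time |alpha|, closed loop:", np.abs(closed_loop[-1000:]).max().round(3))
```

Sweeping the gains in a loop like this is one way to map out where the closed-loop response stays bounded, which mirrors the idea of locating stability boundaries before committing gains to hardware.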
Fast distributed large-pixel-count hologram computation using a GPU cluster.
Pan, Yuechao; Xu, Xuewu; Liang, Xinan
2013-09-10
Large-pixel-count holograms are an essential part of large-size holographic three-dimensional (3D) displays, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved a 71.4 times computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increase the computation speed of large-pixel-count holograms for large size holographic display.
Borresen, Jon; Lynch, Stephen
2012-01-01
In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
Evaluation of SAR in a human body model due to wireless power transmission in the 10 MHz band.
Laakso, Ilkka; Tsuchida, Shogo; Hirata, Akimasa; Kamimura, Yoshitsugu
2012-08-07
This study discusses a computational method for calculating the specific absorption rate (SAR) due to a wireless power transmission system in the 10 MHz frequency band. A two-step quasi-static method comprising the method of moments and the scalar potential finite-difference method is proposed. The applicability of the quasi-static approximation for localized exposure in this frequency band is assessed by comparing the SAR in a lossy dielectric cylinder computed with a full-wave electromagnetic analysis and with the quasi-static approximation. From the computational results, the input impedance of the resonant coils was affected by the presence of the cylinder. On the other hand, the magnetic field distributions in free space and in the presence of the cylinder and an impedance matching circuit were in good agreement; the maximum difference in the amplitude of the magnetic field was 4.8%. For a cylinder-coil distance of 10 mm, the difference between the peak 10 g averaged SAR in the cylinder computed with the full-wave electromagnetic method and with our quasi-static method was 7.8%. These results suggest that the quasi-static approach is applicable to the dosimetry of wireless power transmission in the 10 MHz band. With our two-step quasi-static method, the SAR in the anatomically based model was computed for different exposure scenarios. From those computations, the allowable input power satisfying the limit of a peak 10 g averaged SAR of 2.0 W kg⁻¹ was 830 W in the worst-case exposure scenario with a coil positioned at a distance of 30 mm from the chest.
Computations of Combustion-Powered Actuation for Dynamic Stall Suppression
NASA Technical Reports Server (NTRS)
Jee, Solkeun; Bowles, Patrick O.; Matalanis, Claude G.; Min, Byung-Young; Wake, Brian E.; Crittenden, Tom; Glezer, Ari
2016-01-01
A computational framework for the simulation of dynamic stall suppression with combustion-powered actuation (COMPACT) is validated against wind tunnel experimental results on a VR-12 airfoil. COMPACT slots are located at 10% chord from the leading edge of the airfoil and directed tangentially along the suction-side surface. Helicopter rotor-relevant flow conditions are used in the study. A computationally efficient two-dimensional approach, based on unsteady Reynolds-averaged Navier-Stokes (RANS), is compared in detail against the baseline and the modified airfoil with COMPACT, using aerodynamic forces, pressure profiles, and flow-field data. The two-dimensional RANS approach predicts baseline static and dynamic stall very well. Most of the differences between the computational and experimental results are within two standard deviations of the experimental data. The current framework demonstrates an ability to predict COMPACT efficacy across the experimental dataset. Enhanced aerodynamic lift on the downstroke of the pitching cycle due to COMPACT is well predicted, and the computed cycle-averaged lift enhancement is within 3% of the test data. Differences with experimental data are discussed with a focus on three-dimensional features not included in the simulations and the limited computational model for COMPACT.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
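To make the spatial-decomposition idea concrete, here is a minimal serial sketch (all names, the cutoff value, and the strip-wise layout are assumptions, and the "halo" arrays that a real parallel code would exchange between processes are simply collected in memory):

```python
import numpy as np

CUTOFF = 1.5          # assumed cell-cell interaction radius

def decompose(cells, n_procs, x_min, x_max):
    """Assign each cell to a vertical strip of the domain (one strip per process)."""
    width = (x_max - x_min) / n_procs
    strips = [cells[(cells[:, 0] >= x_min + i * width) &
                    (cells[:, 0] <  x_min + (i + 1) * width)]
              for i in range(n_procs)]
    return strips, width

def halo_cells(strips, width, x_min):
    """For each strip, gather neighbouring cells within CUTOFF of its boundaries.
    In a distributed implementation these would be communicated between processes."""
    halos = []
    for i, strip in enumerate(strips):
        lo = x_min + i * width
        hi = lo + width
        halo = []
        if i > 0:
            left = strips[i - 1]
            halo.append(left[left[:, 0] > lo - CUTOFF])
        if i < len(strips) - 1:
            right = strips[i + 1]
            halo.append(right[right[:, 0] < hi + CUTOFF])
        halos.append(np.vstack(halo) if halo else np.empty((0, 2)))
    return halos

cells = np.random.rand(1000, 2) * 10.0       # toy 2-D cell centres in [0, 10)^2
strips, w = decompose(cells, 4, 0.0, 10.0)
halos = halo_cells(strips, w, 0.0)
print([len(s) for s in strips], [len(h) for h in halos])
```

Dynamic load balancing of the kind described above would amount to moving the strip boundaries so that each process owns a comparable number of cells.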
Bio and health informatics meets cloud: BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase of genomic data brought by the advent of next- or third-generation sequencing (NGS) technologies and the dramatic drop in sequencing cost have turned biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which makes it a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling the ever-increasing biological data. As data increase in size, many research organizations begin to experience a lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security, and extensibility of features. We then discuss issues and limitations of current cloud systems and conclude by suggesting the concept of a biological cloud environment, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
Natural three-qubit interactions in one-way quantum computing
NASA Astrophysics Data System (ADS)
Tame, M. S.; Paternostro, M.; Kim, M. S.; Vedral, V.
2006-02-01
We address the effects of natural three-qubit interactions on the computational power of one-way quantum computation. A benefit of using more sophisticated entanglement structures is the ability to construct compact and economic simulations of quantum algorithms with limited resources. We show that the features of our study are embodied by suitably prepared optical lattices, where effective three-spin interactions have been theoretically demonstrated. We use this to provide a compact construction for the Toffoli gate. Information flow and two-qubit interactions are also outlined, together with a brief analysis of relevant sources of imperfection.
NASA Technical Reports Server (NTRS)
Kuczmarski, Maria A.; Neudeck, Philip G.
2000-01-01
Most solid-state electronic devices (diodes, transistors, and integrated circuits) are based on silicon. Although this material works well for many applications, its properties limit its ability to function under extreme high-temperature or high-power operating conditions. Silicon carbide (SiC), with its desirable physical properties, could someday replace silicon for these types of applications. A major roadblock to realizing this potential is the quality of SiC material that can currently be produced. Semiconductors require very uniform, high-quality material, and commercially available SiC tends to suffer from defects in the crystalline structure that have largely been eliminated in silicon. In some power circuits, these defects can focus energy into an extremely small area, leading to overheating that can damage the device. In an effort to better understand the way that these defects affect the electrical performance and reliability of a SiC device in a power circuit, the NASA Glenn Research Center at Lewis Field began an in-house three-dimensional computational modeling effort. The goal is to predict the temperature distributions within a SiC diode structure subjected to the various transient overvoltage breakdown stresses that occur in power management circuits. A commercial computational fluid dynamics computer program (FLUENT; Fluent, Inc., Lebanon, New Hampshire) was used to build a model of a defect-free SiC diode and generate a computational mesh. A typical breakdown power density was applied over 0.5 msec in a heated layer at the junction between the p-type SiC and n-type SiC, and the temperature distribution throughout the diode was then calculated. The peak temperature extracted from the computational model agreed well (within 6 percent) with previous first-order calculations of the maximum expected temperature at the end of the breakdown pulse. This level of agreement is excellent for a model of this type and indicates that three-dimensional computational modeling can provide useful predictions for this class of problem. The model is now being extended to include the effects of crystal defects. The model will provide unique insights into how high the temperature rises in the vicinity of the defects in a diode at various power densities and pulse durations. This information also will help researchers in understanding and designing SiC devices for safe and reliable operation in high-power circuits.
ERIC Educational Resources Information Center
LaGrandeur, Kevin
Computer-mediated communication (CMC) tends to erase power structures because such communication somehow undermines or escapes discursive limits. Online discussions seem to promote rhetorical experimentation on the part of the participants. Finding a way to explain disparities between electronic discussion and oral discussion has proven difficult.…
NASA Technical Reports Server (NTRS)
Holman, Gordon; Dennis, Brian R.; Tolbert, Anne K.; Schwartz, Richard
2010-01-01
Solar nonthermal hard X-ray (HXR) flare spectra often cannot be fitted by a single power law, but rather require a downward break in the photon spectrum. A possible explanation for this spectral break is nonuniform ionization in the emission region. We have developed a computer code to calculate the photon spectrum from electrons with a power-law distribution injected into a thick target in which the ionization decreases linearly from 100% to zero. We use the bremsstrahlung cross-section from Haug (1997), which closely approximates the full relativistic Bethe-Heitler cross-section, and compare photon spectra computed from this model with those obtained by Kontar, Brown and McArthur (2002), who used a step-function ionization model and the Kramers approximation to the cross-section. We find that for HXR spectra from a target with nonuniform ionization, the difference (Delta-gamma) between the power-law indices above and below the break has an upper limit between approximately 0.2 and 0.7 that depends on the power-law index delta of the injected electron distribution. A broken power-law spectrum with a higher value of Delta-gamma cannot result from nonuniform ionization alone. The model is applied to spectra obtained around the peak times of 20 flares observed by the Ramaty High Energy Solar Spectroscopic Imager (RHESSI) from 2002 to 2004 to determine whether thick-target nonuniform ionization can explain the measured spectral breaks. A Monte Carlo method is used to determine the uncertainties of the best-fit parameters, especially on Delta-gamma. We find that 15 of the 20 flare spectra require a downward spectral break and that at least 6 of these could not be explained by nonuniform ionization alone because they had values of Delta-gamma with less than a 2.5% probability of being consistent with the computed upper limits from the model. The remaining 9 flare spectra, based on this criterion, are consistent with the nonuniform ionization model.
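In generic form (notation chosen here for illustration, not taken from the paper), a photon spectrum with a downward break at energy ε_break can be written as

```latex
I(\varepsilon) =
\begin{cases}
A\,\varepsilon^{-\gamma_{\mathrm{below}}}, & \varepsilon < \varepsilon_{\mathrm{break}},\\[4pt]
A\,\varepsilon_{\mathrm{break}}^{\,\gamma_{\mathrm{above}}-\gamma_{\mathrm{below}}}\;\varepsilon^{-\gamma_{\mathrm{above}}}, & \varepsilon \ge \varepsilon_{\mathrm{break}},
\end{cases}
\qquad
\Delta\gamma \equiv \gamma_{\mathrm{above}} - \gamma_{\mathrm{below}} > 0,
```

so the result above says that nonuniform ionization alone can only produce breaks with Δγ up to roughly 0.2-0.7, the exact ceiling depending on the injected electron index δ.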
Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, Naresh; Baone, Chaitanya; Veda, Santosh
2014-12-31
Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak and winter peak days. With widespread deployment of renewable generation, controllable loads, energy storage devices, and plug-in hybrid electric vehicles expected in the near future, and greater integration of cyber infrastructure (communications, computation, and control), monitoring and controlling the dynamic performance of the grid in real time will become increasingly important. State-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have been rising for more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.
Computation in generalised probabilistic theories
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Barrett, Jonathan
2015-08-01
From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.
Memory management in genome-wide association studies
2009-01-01
Genome-wide association is a powerful tool for the identification of genes that underlie common diseases. Genome-wide association studies generate billions of genotypes and pose significant computational challenges for most users including limited computer memory. We applied a recently developed memory management tool to two analyses of North American Rheumatoid Arthritis Consortium studies and measured the performance in terms of central processing unit and memory usage. We conclude that our memory management approach is simple, efficient, and effective for genome-wide association studies. PMID:20018047
Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls
Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.
2013-01-01
As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses, and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method, Tango's statistic, to genomic sequence data. An advantage of Tango's method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango's statistic, which we call the "Kernel Distance" statistic, took approximately half the time to compute compared with the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff's scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
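For orientation, Tango's index in its usual spatial-statistics form is shown below; the genomic "Kernel Distance" adaptation described above may differ in its choice of distance measure and weights, so this is only a generic sketch.

```latex
C \;=\; \sum_{i}\sum_{j} a_{ij}\,\bigl(r_i - p_i\bigr)\bigl(r_j - p_j\bigr)
  \;=\; (\mathbf{r}-\mathbf{p})^{\mathsf{T}} A\,(\mathbf{r}-\mathbf{p}),
\qquad
a_{ij} = \exp\!\left(-\,d_{ij}/\tau\right),
```

where r_i and p_i are the observed and expected proportions of cases at position i, d_ij is the distance between positions i and j, and τ sets the clustering scale; under the null hypothesis C is well approximated by a scaled chi-square distribution, which is what makes p-value computation fast.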
NASA Technical Reports Server (NTRS)
Morris, C. E. K., Jr.
1981-01-01
Each cycle of the flight profile consists of a climb while the vehicle is tracked and powered by a microwave beam, followed by gliding flight back to a minimum altitude. Parameter variations were used to define the effects of changes in the characteristics of the airplane aerodynamics, the power transmission systems, the propulsion system, and winds. Results show that wind effects limit the reduction of wing loading and the increase of lift coefficient, two effective ways to obtain longer range and endurance for each flight cycle. Calculated climb performance showed strong sensitivity to some power and propulsion parameters. A simplified method of computing gliding endurance was developed.
Upper Body-Based Power Wheelchair Control Interface for Individuals with Tetraplegia
Thorp, Elias B.; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica; Pierella, Camilla; Roth, Elliot J.; Gonzalez, Ismael Seanez; Mussa-Ivaldi, Ferdinando A.
2016-01-01
Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users with a limited vocabulary of commands and often do not take advantage of the user's residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high-dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control the power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five sessions of driving, were able to achieve smoothness similar to that with their current joystick. All participants were twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize training in controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control. PMID:26054071
Superadiabatic holonomic quantum computation in cavity QED
NASA Astrophysics Data System (ADS)
Liu, Bao-Jie; Huang, Zhen-Hua; Xue, Zheng-Yuan; Zhang, Xin-Ding
2017-06-01
Adiabatic quantum control is a powerful tool for quantum engineering and a key component in some quantum computation models, where accurate control over the timing of the involved pulses is not needed. However, the adiabatic condition requires that the process be very slow, which limits its application in quantum computation, where quantum gates are preferred to be fast due to the limited coherence times of the quantum systems. Here, we propose a feasible scheme to implement universal holonomic quantum computation based on non-Abelian geometric phases with superadiabatic quantum control, where the adiabatic manipulation is sped up while retaining its robustness against errors in the timing control. Consolidating the advantages of both strategies, our proposal is thus both robust and fast. The cavity QED system is adopted as a typical example to illustrate the merits; the proposed scheme can be realized in a tripod configuration by appropriately controlling the pulse shapes and their relative strength. To demonstrate the distinct performance of our proposal, we also compare our scheme with the conventional adiabatic strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Kai; Qi, Junjian; Kang, Wei
2016-08-01
Growing penetration of intermittent resources such as renewable generation increases the risk of instability in a power grid. This paper introduces the concept of observability and its computational algorithms for a power grid monitored by a wide-area measurement system (WAMS) based on synchrophasors, e.g., phasor measurement units (PMUs). The goal is to estimate the real-time states of generators, especially for potentially unstable trajectories, information that is critical for the detection of rotor angle instability of the grid. The paper studies the number and siting of synchrophasors in a power grid so that the state of the system can be accurately estimated in the presence of instability. An unscented Kalman filter (UKF) is adopted as a tool to estimate the dynamic states that are not directly measured by synchrophasors. The theory and its computational algorithms are illustrated in detail using a 9-bus 3-generator power system model and then tested on a 140-bus 48-generator Northeast Power Coordinating Council power grid model. Case studies on those two systems demonstrate the performance of the proposed approach using a limited number of synchrophasors for dynamic state estimation for stability assessment, and its robustness against moderate inaccuracies in model parameters.
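The sketch below shows the basic (unscaled, Julier-style) unscented Kalman filter recursion on a toy two-state system with a nonlinear measurement. It is not the paper's generator model or PMU measurement equations; the process and measurement functions, noise levels, and weights are all illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Julier-style sigma points and weights for mean x and covariance P."""
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)      # columns are the spread vectors
    pts = np.vstack([x, x + S.T, x - S.T])       # shape (2n + 1, n)
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def unscented_transform(pts, w, func, noise_cov):
    """Propagate sigma points through func and recover mean and covariance."""
    Y = np.array([func(p) for p in pts])
    mean = w @ Y
    cov = noise_cov + sum(wi * np.outer(yi - mean, yi - mean) for wi, yi in zip(w, Y))
    return Y, mean, cov

def ukf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle of a basic unscented Kalman filter."""
    pts, w = sigma_points(x, P)
    _, x_pred, P_pred = unscented_transform(pts, w, f, Q)      # predict
    pts, w = sigma_points(x_pred, P_pred)
    Zp, z_pred, S = unscented_transform(pts, w, h, R)          # measurement prediction
    C = sum(wi * np.outer(xi - x_pred, zi - z_pred) for wi, xi, zi in zip(w, pts, Zp))
    K = C @ np.linalg.inv(S)                                   # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

# Toy usage: recover a hidden "rotor speed" from a sine-like, PMU-flavoured observable.
f = lambda s: np.array([s[0] + 0.01 * s[1], s[1]])   # angle advances at rate s[1]
h = lambda s: np.array([np.sin(s[0])])               # nonlinear measurement of angle only
x, P = np.array([0.0, 0.8]), np.eye(2) * 0.1
Q, R = np.eye(2) * 1e-5, np.eye(1) * 1e-2
rng = np.random.default_rng(0)
for k in range(200):
    z = np.array([np.sin(0.01 * k)]) + 0.05 * rng.standard_normal(1)
    x, P = ukf_step(x, P, z, f, h, Q, R)
print(x)   # the speed estimate should move toward the true value of 1.0
```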
Orthorectification by Using GPGPU Method
NASA Astrophysics Data System (ADS)
Sahin, H.; Kulur, S.
2012-07-01
Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capabilities and high memory bandwidth compared to central processing units (CPUs). Data-parallel computation can be briefly described as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capabilities has attracted the attention of researchers dealing with complex problems that need large amounts of calculation. This interest has given rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful hardware that is cheap and affordable, so they have become an alternative to conventional processors. Graphics chips, once fixed-function application hardware, have been transformed into modern, powerful, and programmable processors to meet overall computing needs. Especially in recent years, the use of graphics processing units for general-purpose computation has drawn researchers and developers to this approach. The biggest problem is that graphics processing units use programming models unlike current programming methods; efficient GPU programming therefore requires re-coding the current algorithm with the limitations and structure of the graphics hardware in mind. Such many-core processors cannot be programmed effectively with traditional sequential or event-driven programming methods. GPUs are especially effective when the same computing steps are repeated over many data elements and high accuracy is needed, carrying out such computations more quickly. Compared with GPUs, CPUs, which perform one computation at a time according to the flow control, are slower for this kind of workload. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm is coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results are compared with a traditional CPU implementation. In a second application, projective rectification is coded using the GPGPU method and CUDA. Sample images of various sizes were processed and the results of the program evaluated. The GPGPU method can be used especially for repetition of the same computations on highly dense data, thus finding the solution quickly.
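To illustrate the per-pixel, data-parallel structure that such a GPGPU rectification kernel exploits, here is a vectorized NumPy stand-in (not the authors' CUDA code): every output pixel is mapped back through an inverse homography independently, which is exactly the per-thread work a CUDA kernel would perform. The homography values are arbitrary.

```python
import numpy as np

def projective_rectify(image, H_inv, out_shape):
    """Map every output pixel back through the inverse homography and sample
    the source image (nearest neighbour). Each pixel is independent."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    ones = np.ones_like(xs)
    coords = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
    src = H_inv @ coords                       # homogeneous source coordinates
    u = np.rint(src[0] / src[2]).astype(int)
    v = np.rint(src[1] / src[2]).astype(int)
    valid = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
    out = np.zeros(h_out * w_out, dtype=image.dtype)
    out[valid] = image[v[valid], u[valid]]
    return out.reshape(out_shape)

img = (np.random.rand(480, 640) * 255).astype(np.uint8)
H = np.array([[1.0, 0.05, 10.0],
              [0.02, 1.0, 5.0],
              [1e-5, 2e-5, 1.0]])              # arbitrary projective transform
rectified = projective_rectify(img, np.linalg.inv(H), (480, 640))
print(rectified.shape)
```

In a CUDA implementation, the loop implied by the vectorized operations would become one thread per output pixel, with the image held in device (or texture) memory.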
Bennett clocking of quantum-dot cellular automata and the limits to binary logic scaling.
Lent, Craig S; Liu, Mo; Lu, Yuhui
2006-08-28
We examine power dissipation in different clocking schemes for molecular quantum-dot cellular automata (QCA) circuits. 'Landauer clocking' involves the adiabatic transition of a molecular cell from the null state to an active state carrying data. Cell layout creates devices which allow data in cells to interact and thereby perform useful computation. We perform direct solutions of the equation of motion for the system in contact with the thermal environment and see that Landauer's Principle applies: one must dissipate an energy of at least kBT per bit only when the information is erased. The ideas of Bennett can be applied to keep copies of the bit information by echoing inputs to outputs, thus embedding any logically irreversible circuit in a logically reversible circuit, at the cost of added circuit complexity. A promising alternative which we term 'Bennett clocking' requires only altering the timing of the clocking signals so that bit information is simply held in place by the clock until a computational block is complete, then erased in the reverse order of computation. This approach results in ultralow power dissipation without additional circuit complexity. These results offer a concrete example in which to consider recent claims regarding the fundamental limits of binary logic scaling.
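For reference, the Landauer bound invoked here is usually stated with the factor of ln 2 made explicit; at room temperature it evaluates to

```latex
E_{\mathrm{erase}} \;\ge\; k_{B}\,T \ln 2
\;\approx\; 1.38\times10^{-23}\,\mathrm{J\,K^{-1}} \times 300\,\mathrm{K} \times 0.693
\;\approx\; 2.9\times10^{-21}\,\mathrm{J}
\;\approx\; 0.018\,\mathrm{eV}
```

per bit erased, which is the floor that Bennett-style (logically reversible) clocking is designed to avoid paying on every switching event.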
Bennett clocking of quantum-dot cellular automata and the limits to binary logic scaling
NASA Astrophysics Data System (ADS)
Lent, Craig S.; Liu, Mo; Lu, Yuhui
2006-08-01
We examine power dissipation in different clocking schemes for molecular quantum-dot cellular automata (QCA) circuits. 'Landauer clocking' involves the adiabatic transition of a molecular cell from the null state to an active state carrying data. Cell layout creates devices which allow data in cells to interact and thereby perform useful computation. We perform direct solutions of the equation of motion for the system in contact with the thermal environment and see that Landauer's Principle applies: one must dissipate an energy of at least kBT per bit only when the information is erased. The ideas of Bennett can be applied to keep copies of the bit information by echoing inputs to outputs, thus embedding any logically irreversible circuit in a logically reversible circuit, at the cost of added circuit complexity. A promising alternative which we term 'Bennett clocking' requires only altering the timing of the clocking signals so that bit information is simply held in place by the clock until a computational block is complete, then erased in the reverse order of computation. This approach results in ultralow power dissipation without additional circuit complexity. These results offer a concrete example in which to consider recent claims regarding the fundamental limits of binary logic scaling.
MindEdit: A P300-based text editor for mobile devices.
Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M
2017-01-01
Practical application of Brain-Computer Interfaces (BCIs) requires that the whole BCI system be portable. The mobility of BCI systems involves two aspects: making the electroencephalography (EEG) recording devices portable, and developing software applications with low computational complexity that can run on devices with low computational power such as tablets and smartphones. This paper addresses the development of MindEdit, a P300-based text editor for Android-based devices. Given the limited resources of mobile devices and their limited computational power, a novel ensemble classifier is utilized that uses Principal Component Analysis (PCA) features to identify P300 evoked potentials from EEG recordings. PCA computations in the proposed method are channel-based, as opposed to concatenating all channels as in traditional feature extraction methods; thus, this method has lower computational complexity than traditional P300 detection methods. The performance of the method is demonstrated on data recorded with MindEdit on an Android tablet using the Emotiv wireless neuroheadset. Results demonstrate the capability of the introduced PCA ensemble classifier to classify P300 data with a maximum average accuracy of 78.37±16.09% for cross-validation data and 77.5±19.69% for online test data using only 10 trials per symbol and a 33-character training dataset. Our analysis indicates that the introduced method outperforms traditional feature extraction methods. For faster operation of MindEdit, a variable-number-of-trials scheme is introduced that resulted in an online average accuracy of 64.17±19.6% and a maximum bitrate of 6.25 bit/min. These results demonstrate the efficacy of using the developed BCI application with mobile devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
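A loose sketch of the channel-wise idea (one PCA feature extractor per EEG channel feeding a simple soft-voting ensemble) is given below. It is not the authors' exact classifier: the per-channel classifier, number of components, and the toy random data standing in for Emotiv recordings are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_channel_ensemble(epochs, labels, n_components=5):
    """Fit one PCA + LDA classifier per EEG channel.
    epochs: array (n_trials, n_channels, n_samples); labels: 0 = non-target, 1 = P300.
    Working channel-by-channel avoids concatenating all channels into one long vector."""
    members = []
    for ch in range(epochs.shape[1]):
        X = epochs[:, ch, :]
        pca = PCA(n_components=n_components).fit(X)
        clf = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
        members.append((pca, clf))
    return members

def predict_ensemble(members, epochs):
    """Average the per-channel P300 probabilities (simple soft voting)."""
    scores = np.zeros(epochs.shape[0])
    for ch, (pca, clf) in enumerate(members):
        scores += clf.predict_proba(pca.transform(epochs[:, ch, :]))[:, 1]
    return scores / len(members)

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 14, 128))     # 120 trials, 14 channels, 128 samples
labels = rng.integers(0, 2, 120)
ensemble = fit_channel_ensemble(epochs, labels)
print(predict_ensemble(ensemble, epochs[:5]))
```

The appeal for a mobile device is that each member works on a short, low-dimensional feature vector, keeping both memory use and per-trial arithmetic small.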
Quantum-assisted biomolecular modelling.
Harris, Sarah A; Kendon, Vivien M
2010-08-13
Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to fully evaluate. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules, to treat diseases, for example. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing offers the possibility of performing huge numbers of calculations in parallel, when it becomes available. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation and speculate on the future importance of quantum-assisted biomolecular modelling.
The NANOGrav Nine-Year Data Set: Limits on the Isotropic Stochastic Gravitational Wave Background
NASA Technical Reports Server (NTRS)
Arzoumanian, Z.; Brazier, A.; Burke-Spolaor, S.; Chamberlin, S. J.; Chatterjee, S.; Christy, B.; Cordes, J. M.; Cornish, N. J.; Crowter, K.; Demorest, P. B.;
2016-01-01
We compute upper limits on the nanohertz-frequency isotropic stochastic gravitational wave background (GWB) using the 9 year data set from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration. Well-tested Bayesian techniques are used to set upper limits on the dimensionless strain amplitude (at a frequency of 1 yr^-1) for a GWB from supermassive black hole binaries of A_gw < 1.5 x 10^-15. We also parameterize the GWB spectrum with a broken power-law model by placing priors on the strain amplitude derived from simulations of Sesana and McWilliams et al. Using Bayesian model selection we find that the data favor a broken power law over a pure power law with odds ratios of 2.2 and 22 to one for the Sesana and McWilliams prior models, respectively. Using the broken power-law analysis we construct posterior distributions on environmental factors that drive the binary to the GW-driven regime, including the stellar mass density for stellar scattering, the mass accretion rate for circumbinary disk interaction, and the orbital eccentricity for eccentric binaries, marking the first time that the shape of the GWB spectrum has been used to make astrophysical inferences. Returning to a power-law model, we place stringent limits on the energy density of relic GWs, Omega_gw(f) h^2 < 4.2 x 10^-10. Our limit on the cosmic string GWB, Omega_gw(f) h^2 < 2.2 x 10^-10, translates to a conservative limit on the cosmic string tension of G mu < 3.3 x 10^-8, a factor of four better than the joint Planck and high-l cosmic microwave background data from other experiments.
Profiling an application for power consumption during execution on a compute node
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-09-17
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
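As a toy illustration of composing an application power profile from a hardware power-consumption profile (not the patented method itself; the operation categories and per-operation energy figures are invented for the example):

```python
# Assumed hardware power-consumption profile: energy per operation, in joules.
HW_ENERGY_PER_OP_J = {"flop": 0.9e-9, "mem_access": 3.2e-9, "network_msg": 4.5e-6}

def application_power_profile(op_counts, runtime_s):
    """op_counts: operations the application performs, e.g. {'flop': 2e12, ...}.
    Combines the counts with the hardware profile to report energy and average power."""
    energy_j = {op: n * HW_ENERGY_PER_OP_J[op] for op, n in op_counts.items()}
    total_j = sum(energy_j.values())
    return {"per_operation_J": energy_j,
            "total_J": total_j,
            "average_W": total_j / runtime_s}

print(application_power_profile(
    {"flop": 2e12, "mem_access": 5e11, "network_msg": 1e6}, runtime_s=120.0))
```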
PS3 CELL Development for Scientific Computation and Research
NASA Astrophysics Data System (ADS)
Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.
2007-12-01
The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fact aligns well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow 4 simultaneous floating point operations by using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
NASA Astrophysics Data System (ADS)
Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem
2017-11-01
Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
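The optimization loop described above (a genetic algorithm driven by a fast surrogate in place of the full CE-QUAL-W2 model) can be sketched as follows. The surrogate here is a synthetic stand-in function rather than a trained neural network, and the schedule bounds, penalty weight, and GA settings are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate(schedule):
    """Stand-in for the trained ANN emulator: returns predicted
    (power_value, minimum dissolved oxygen) for a 24-hour release schedule."""
    power = np.sum(np.sqrt(schedule))                 # diminishing returns on release
    min_do = 8.0 - 0.004 * np.sum(schedule)           # more release -> lower DO (toy)
    return power, min_do

def fitness(schedule, do_limit=5.0):
    power, min_do = surrogate(schedule)
    penalty = 100.0 * max(0.0, do_limit - min_do)     # penalize DO-limit violations
    return power - penalty

def genetic_search(n_hours=24, pop_size=40, generations=200, q_max=300.0):
    pop = rng.uniform(0.0, q_max, size=(pop_size, n_hours))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection of parents.
        parents = pop[[max(rng.integers(0, pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # One-point crossover followed by Gaussian mutation.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_hours)
            children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                        parents[i, cut:].copy())
        children += rng.normal(0.0, 5.0, children.shape)
        pop = np.clip(children, 0.0, q_max)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

best_schedule, best_score = genetic_search()
print(round(float(best_score), 2))
```

Because the surrogate is cheap to evaluate, the GA can afford thousands of candidate schedules per planning horizon, which is the computational saving the abstract refers to.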
Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...
2017-10-24
Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P
2010-10-30
Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.
Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
ERIC Educational Resources Information Center
Durrett, John; Trezona, Judi
1982-01-01
Discusses physiological and psychological aspects of color. Includes guidelines for using color effectively, especially in the development of computer programs. Indicates that if applied with its limitations and requirements in mind, color can be a powerful manipulator of attention, memory, and understanding. (Author/JN)
An Advanced Simulation Framework for Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Li, P. P.; Tyrrell, R. Yeung D.; Adhami, N.; Li, T.; Henry, H.
1994-01-01
Discrete-event simulation (DEVS) users have long been faced with a three-way trade-off of balancing execution time, model fidelity, and number of objects simulated. Because of the limits of computer processing power, the analyst is often forced to settle for less than desired performance in one or more of these areas.
NASA Astrophysics Data System (ADS)
Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.
2018-02-01
The use of a computer program for the analysis of PWR-type core neutronic design parameters has been carried out in some previous studies. These studies included validation of the computer code against neutronic parameter values obtained from measurements and benchmark calculations. In this study, validation and analysis of the AP1000 first-cycle core radial power peaking factor were performed using the CITATION module of the SRAC2006 computer code. The computer code has also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation has been done in two-dimensional X-Y geometry through ¼-section modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code, and also the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without Rod Cluster Control Assemblies (RCCA), with insertion of a single RCCA (AO, M1, M2, MA, MB, MC, MD), and with insertion of multiple RCCAs (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum fuel rod power factor in the fuel assembly was assumed to be approximately 1.406. The analysis of the calculation results showed that the two-dimensional CITATION module of the SRAC2006 code is accurate for the AP1000 power distribution calculation without RCCA and with MA + MB RCCA insertion. The power peaking factor in the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, is still below the safety limit value (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core in its first operating cycle can therefore be considered safe.
Convolutional networks for vehicle track segmentation
Quach, Tu-Thach
2017-08-19
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are unable to capture natural track features such as continuity and parallelism. More powerful, but computationally expensive, models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3-by-3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate in low-power settings and have limited training data. As a result, we aim for small, efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our 6-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
Convolutional networks for vehicle track segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quach, Tu-Thach
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are unable to capture natural track features such as continuity and parallelism. More powerful, but computationally expensive, models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3-by-3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate in low-power settings and have limited training data. As a result, we aim for small, efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our 6-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
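A small fully-convolutional network in the spirit of this design (all 3-by-3 convolutions, with growing dilation to enlarge the receptive field without pooling) is sketched below in PyTorch. The channel widths and dilation rates are assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class DilatedTrackNet(nn.Module):
    """Toy dilated segmentation network: stacked 3x3 convolutions whose dilation
    grows with depth, followed by a 1x1 convolution producing a per-pixel logit."""
    def __init__(self, in_ch=1, width=32):
        super().__init__()
        dilations = [1, 1, 2, 4, 8, 1]            # assumed schedule, six 3x3 layers
        layers, ch = [], in_ch
        for d in dilations:
            layers += [nn.Conv2d(ch, width, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            ch = width
        layers.append(nn.Conv2d(ch, 1, 1))        # per-pixel track logit
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = DilatedTrackNet()
image = torch.randn(1, 1, 256, 256)               # stand-in for an input image patch
logits = model(image)
print(logits.shape)                                # torch.Size([1, 1, 256, 256])
```

Keeping every layer at 3x3 with matched padding preserves the spatial size, so the network stays small enough to train on a few tens of images and cheap enough for low-power deployment.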
Plasma separation process. Betacell (BCELL) code, user's manual
NASA Astrophysics Data System (ADS)
Taherzadeh, M.
1987-11-01
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, the degradation due to the emitting source radiation, and the source/cell lifetime power reduction processes. Additionally, a comparison is made between Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison.
Profiling an application for power consumption during execution on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2012-08-21
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagan, Mike; Schlachter, Jeremy; Yoshii, Kazutomo
Energy and power consumption are major limitations to continued scaling of computing systems. Inexactness, where the quality of the solution can be traded for energy savings, has been proposed as a counterintuitive approach to overcoming those limitations. However, in the past, inexactness has necessitated highly customized or specialized hardware. In order to move away from customization, in earlier work [4] it was shown that by interpreting precision in the computation as the parameter to trade to achieve inexactness, weather prediction and page rank could both yield energy savings through reduced precision while preserving the quality of the application. However, this required representations of numbers that were not readily available on commercial off-the-shelf (COTS) processors. In this paper, we provide opportunities for extending the notion of trading precision for energy savings into the world of COTS. We provide a model and analyze the opportunities and behavior of all three IEEE-compliant precision values available on COTS processors: (i) double, (ii) single, and (iii) half. Through measurements, we show through a limit study that energy savings in going from double to half precision can potentially exceed a factor of four, largely due to memory and cache effects.
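Since the reported savings are dominated by memory and cache effects, the simplest way to see the lever being pulled is the per-element footprint of the three IEEE precisions. The snippet below only reports bytes and approximate decimal precision; actual energy figures are hardware-specific and are not modeled here.

```python
import numpy as np

# Compare the memory footprint (and hence data-movement cost) of the three
# IEEE precisions discussed above for a same-sized array.
n = 1_000_000
for dtype in (np.float64, np.float32, np.float16):
    a = np.ones(n, dtype=dtype)
    print(f"{np.dtype(dtype).name:8s} {a.nbytes / 1e6:6.1f} MB "
          f"(~{np.finfo(dtype).precision} significant decimal digits)")
```

Halving the precision halves the bytes moved per operand, which is why cache-resident working sets, and with them energy per result, can improve by much more than the naive 2x or 4x arithmetic ratio.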
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.
Searching for periodic sources with LIGO. II. Hierarchical searches
NASA Astrophysics Data System (ADS)
Brady, Patrick R.; Creighton, Teviet
2000-04-01
The detection of quasi-periodic sources of gravitational waves requires the accumulation of signal to noise over long observation times. This represents the most difficult data analysis problem facing experimenters with detectors such as those at LIGO. If not removed, Earth-motion induced Doppler modulations and intrinsic variations of the gravitational-wave frequency make the signals impossible to detect. These effects can be corrected (removed) using a parametrized model for the frequency evolution. In a previous paper, we introduced such a model and computed the number of independent parameter space points for which corrections must be applied to the data stream in a coherent search. Since this number increases with the observation time, the sensitivity of a search for continuous gravitational-wave signals is computationally bound when data analysis proceeds at a similar rate to data acquisition. In this paper, we extend the formalism developed by Brady et al. [Phys. Rev. D 57, 2101 (1998)], and we compute the number of independent corrections Np(ΔT,N) required for incoherent search strategies. These strategies rely on the method of stacked power spectra-a demodulated time series is divided into N segments of length ΔT, each segment is Fourier transformed, a power spectrum is computed, and the N spectra are summed up. This method is incoherent; phase information is lost from segment to segment. Nevertheless, power from a signal with fixed frequency (in the corrected time series) is accumulated in a single frequency bin, and amplitude signal to noise accumulates as ~N1/4 (assuming the segment length ΔT is held fixed). For fixed available computing power, there are optimal values for N and ΔT which maximize the sensitivity of a search in which data analysis takes a total time NΔT. We estimate that the optimal sensitivity of an all-sky search that uses incoherent stacks is a factor of 2-4 better than achieved using coherent Fourier transforms, assuming the same available computing power; incoherent methods are computationally efficient at exploring large parameter spaces. We also consider a two-stage hierarchical search in which candidate events from a search using short data segments are followed up in a search using longer data segments. This hierarchical strategy yields a further 20-60 % improvement in sensitivity in all-sky (or directed) searches for old (>=1000 yr) slow (<=200 Hz) pulsars, and for young (>=40 yr) fast (<=1000 Hz) pulsars. Assuming enhanced LIGO detectors (LIGO-II) and 1012 flops of effective computing power, we examine the sensitivity to sources in three specialized classes. A limited area search for pulsars in the Galactic core would detect objects with gravitational ellipticities of ɛ>~5×10-6 at 200 Hz; such limits provide information about the strength of the crust in neutron stars. Gravitational waves emitted by unstable r-modes of newborn neutron stars would be detected out to distances of ~8 Mpc, if the r-modes saturate at a dimensionless amplitude of order unity and an optical supernova provides the position of the source on the sky. In searches targeting low-mass x-ray binary systems (in which accretion-driven spin up is balanced by gravitational-wave spin down), it is important to use information from electromagnetic observations to determine the orbital parameters as accurately as possible. 
An estimate of the difficulty of these searches suggests that objects with x-ray fluxes exceeding 2×10^-8 erg cm^-2 s^-1 would be detected using the enhanced interferometers in their broadband configuration. This puts Sco X-1 on the verge of detectability in a broadband search; the amplitude signal to noise would be increased by a factor of order ~5-10 by operating the interferometer in a signal-recycled, narrow-band configuration. Further work is needed to determine the optimal search strategy when limited information is available about the frequency evolution of a source in a targeted search.
NASA Astrophysics Data System (ADS)
Tanner, Steve; Stein, Cara; Graves, Sara J.
Networks of remote sensors are becoming more common as technology improves and costs decline. In the past, a remote sensor was usually a device that collected data to be retrieved at a later time by some other mechanism. These collected data were usually processed well after the fact, at a computer greatly removed from the in situ sensing location. This has begun to change as sensor technology, on-board processing, and network communication capabilities have increased and their prices have dropped. There has been an explosion in the number of sensors and sensing devices, not just around the world, but literally throughout the solar system. These sensors are not only becoming vastly more sophisticated, accurate, and detailed in the data they gather but they are also becoming cheaper, lighter, and smaller. At the same time, engineers have developed improved methods to embed computing systems, memory, storage, and communication capabilities into the platforms that host these sensors. Now, it is not unusual to see large networks of sensors working in cooperation with one another. Nor does it seem strange to see the autonomous operation of sensor-based systems, from space-based satellites to smart vacuum cleaners that keep our homes clean and robotic toys that help to entertain and educate our children. But access to sensor data and computing power is only part of the story. For all the power of these systems, there are still substantial limits to what they can accomplish. These include the well-known limits to current Artificial Intelligence capabilities and our limited ability to program the abstract concepts, goals, and improvisation needed for fully autonomous systems. But it also includes much more basic engineering problems such as lack of adequate power, communications bandwidth, and memory, as well as problems with the geolocation and real-time georeferencing required to integrate data from multiple sensors to be used together.
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. 
L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. 
S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2016-11-01
We report results of a deep all-sky search for periodic gravitational waves from isolated neutron stars in data from the S6 LIGO science run. The search was possible thanks to the computing power provided by the volunteers of the Einstein@Home distributed computing project. We find no significant signal candidate and set the most stringent upper limits to date on the amplitude of gravitational wave signals from the target population. At the frequency of best strain sensitivity, between 170.5 and 171 Hz, we set a 90% confidence upper limit of 5.5×10^-25, while at the high end of our frequency range, around 505 Hz, we achieve upper limits ≃10^-24. At 230 Hz we can exclude sources with ellipticities greater than 10^-6 within 100 pc of Earth for a fiducial value of the principal moment of inertia of 10^38 kg m^2. If we assume a higher (lower) gravitational wave spin-down we constrain farther (closer) objects to higher (lower) ellipticities.
Spacecraft solid state power distribution switch
NASA Technical Reports Server (NTRS)
Praver, G. A.; Theisinger, P. C.
1986-01-01
As a spacecraft performs its mission, various loads are connected to the spacecraft power bus in response to commands from an onboard computer, a function called power distribution. For the Mariner Mark II set of planetary missions, the power bus is 30 volts dc, and when loads are connected or disconnected, both the bus and power return side must be switched. In addition, the power distribution function must be immune to single point failures and, when power is first applied, all switches must be in a known state. Traditionally, these requirements have been met by electromechanical latching relays. This paper describes a solid state switch which not only satisfies the requirements but incorporates several additional features, including soft turn-on, a programmable current trip point with noise immunity, instantaneous current limiting, and direct telemetry of load currents and switch status. A breadboard of the design has been constructed and some initial test results are included.
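To make the trip-point behaviour concrete, the sketch below models a programmable current trip point with noise immunity as an over-current test that must persist for several samples before the switch latches open. All class names, thresholds, and timings are hypothetical and are not taken from the Mariner Mark II design.

    # Illustrative sketch only: a software model of a programmable current
    # trip point with noise immunity (debounce). All names, thresholds and
    # timings are hypothetical, not the Mariner Mark II switch design.

    class TripPointMonitor:
        def __init__(self, trip_amps, debounce_samples):
            self.trip_amps = trip_amps              # programmable trip point
            self.debounce_samples = debounce_samples
            self._over_count = 0
            self.switch_closed = True

        def sample(self, load_current_amps):
            """Feed one telemetry sample; open the switch only if the
            over-current condition persists (noise immunity)."""
            if load_current_amps > self.trip_amps:
                self._over_count += 1
            else:
                self._over_count = 0                # a single clean sample resets
            if self._over_count >= self.debounce_samples:
                self.switch_closed = False          # latch open on a sustained fault
            return self.switch_closed

    monitor = TripPointMonitor(trip_amps=2.5, debounce_samples=3)
    for i_amps in [1.0, 2.8, 2.9, 1.2, 3.0, 3.1, 3.2]:
        monitor.sample(i_amps)
    print("switch closed:", monitor.switch_closed)  # False after 3 sustained over-current samples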
The LHCb software and computing upgrade for Run 3: opportunities and challenges
NASA Astrophysics Data System (ADS)
Bozzi, C.; Roiser, S.; LHCb Collaboration
2017-10-01
The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications on the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi-, many-core architectures, and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^{TT}_{l=2} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity odd signals TB and EB are found to be consistent with zero.
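For readers unfamiliar with QML estimators, one commonly quoted form of a pixel-based quadratic estimator (a sketch in the Tegmark style; the exact implementation used for these maps may differ in detail) is

    \hat{C}_\ell = \sum_{\ell'} \left(F^{-1}\right)_{\ell \ell'}
        \left[ \mathbf{x}^{T} E_{\ell'}\, \mathbf{x} - \operatorname{tr}\!\left(N E_{\ell'}\right) \right],
    \qquad
    E_\ell = \tfrac{1}{2}\, C^{-1} \frac{\partial C}{\partial C_\ell}\, C^{-1},
    \qquad
    F_{\ell \ell'} = \tfrac{1}{2} \operatorname{tr}\!\left( C^{-1} \frac{\partial C}{\partial C_\ell}\,
        C^{-1} \frac{\partial C}{\partial C_{\ell'}} \right),

where x is the (T, Q, U) pixel vector, C its total signal-plus-noise covariance, N the noise covariance, and F the Fisher matrix; for Gaussian fields this construction is unbiased and close to minimum variance among quadratic estimators.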
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-10
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
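A minimal sketch of the claimed flow, under the assumption of hypothetical helper names (power_up_region and the region identifiers are placeholders, not a real compute-node API):

    # Illustrative sketch of the claimed method: power up only part of memory at
    # node initialization, then power up more as the application requests it.
    # power_up_region / region ids are hypothetical placeholders, not a real API.

    POWERED_REGIONS = set()

    def power_up_region(region_id):
        POWERED_REGIONS.add(region_id)            # stand-in for a hardware control

    def init_compute_node(os_region=0):
        power_up_region(os_region)                # only the OS portion is powered
        return {"os_region": os_region, "app_regions": []}

    def load_application(node, regions_needed):
        for r in regions_needed:                  # allocate and power additional
            power_up_region(r)                    # portions for the application
            node["app_regions"].append(r)
        return node

    node = init_compute_node()
    node = load_application(node, regions_needed=[1, 2])
    print(sorted(POWERED_REGIONS))                # [0, 1, 2]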
Control of a solar-energy-supplied electrical-power system without intermediate circuitry
NASA Astrophysics Data System (ADS)
Leistner, K.
A computer control system is developed for electric-power systems comprising solar cells and small numbers of users with individual centrally controlled converters (and storage facilities when needed). Typical system structures are reviewed; the advantages of systems without an intermediate network are outlined; the demands on a control system in such a network (optimizing generator working point and power distribution) are defined; and a flexible modular prototype system is described in detail. A charging station for lead batteries used in electric automobiles is analyzed as an example. The power requirements of the control system (30 W for generator control and 50 W for communications and distribution control) are found to limit its use to larger networks.
NASA Technical Reports Server (NTRS)
Fromme, J.; Golberg, M.; Werth, J.
1979-01-01
The numerical computation of unsteady airloads acting upon thin airfoils with multiple leading- and trailing-edge controls in two-dimensional ventilated subsonic wind tunnels is studied. The foundation of the computational method is strengthened with a new and more powerful mathematical existence and convergence theory for solving Cauchy singular integral equations of the first kind, and the method of convergence acceleration by extrapolation to the limit is introduced to analyze airfoils with flaps. New results are presented for steady and unsteady flow, including the effect of acoustic resonance between ventilated wind-tunnel walls and airfoils with oscillating flaps. The computer program TWODI is available for general use and a complete set of instructions is provided.
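The phrase "extrapolation to the limit" refers to classical convergence acceleration; the generic Richardson-extrapolation sketch below (not the TWODI implementation) shows how combining results at two step sizes cancels the leading error term.

    # Generic Richardson extrapolation sketch (illustrative; not the TWODI code).
    # If A(h) = A + c*h**p + O(h**q), combining A(h) and A(h/2) cancels the
    # leading error term and accelerates convergence toward the limit A.
    import math

    def richardson(A, h, p=2):
        """Improved estimate of lim_{h->0} A(h), assuming order-p leading error."""
        coarse = A(h)
        fine = A(h / 2)
        return (2**p * fine - coarse) / (2**p - 1)

    # Example: centered-difference derivative of sin at x = 1 (exact value cos(1)).
    f, x = math.sin, 1.0
    A = lambda h: (f(x + h) - f(x - h)) / (2 * h)
    print(abs(A(1e-2) - math.cos(1.0)))              # plain estimate error
    print(abs(richardson(A, 1e-2) - math.cos(1.0)))  # extrapolated, much smaller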
Enabling Earth Science: The Facilities and People of the NCCS
NASA Technical Reports Server (NTRS)
2002-01-01
The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly related to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and the power industry. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named AS-II), which makes the application of FTA simpler, quicker, and cheaper, thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on the methodology, we have developed a computer-automated tool. The details are presented in this paper.
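As a minimal illustration of how a fault tree is evaluated (not the AS-II algorithm itself), the sketch below propagates probabilities of independent basic events through AND/OR gates to the top event; the tree and the probabilities are hypothetical.

    # Minimal fault-tree evaluation sketch: the top-event probability is
    # propagated from independent basic events through AND/OR gates.

    def p_and(probs):                 # all inputs must fail
        out = 1.0
        for p in probs:
            out *= p
        return out

    def p_or(probs):                  # at least one input fails
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out

    # Hypothetical tree: top = OR(pump_fails, AND(valve_stuck, sensor_fails))
    basic = {"pump_fails": 1e-3, "valve_stuck": 5e-3, "sensor_fails": 2e-2}
    top = p_or([basic["pump_fails"],
                p_and([basic["valve_stuck"], basic["sensor_fails"]])])
    print(f"top event probability ≈ {top:.3e}")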
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS (SON of "TRIZ"): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
Single-chip microprocessor that communicates directly using light
NASA Astrophysics Data System (ADS)
Sun, Chen; Wade, Mark T.; Lee, Yunsup; Orcutt, Jason S.; Alloatti, Luca; Georgas, Michael S.; Waterman, Andrew S.; Shainline, Jeffrey M.; Avizienis, Rimas R.; Lin, Sen; Moss, Benjamin R.; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H.; Cook, Henry M.; Ou, Albert J.; Leu, Jonathan C.; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J.; Popović, Miloš A.; Stojanović, Vladimir M.
2015-12-01
Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.
Single-chip microprocessor that communicates directly using light.
Sun, Chen; Wade, Mark T; Lee, Yunsup; Orcutt, Jason S; Alloatti, Luca; Georgas, Michael S; Waterman, Andrew S; Shainline, Jeffrey M; Avizienis, Rimas R; Lin, Sen; Moss, Benjamin R; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H; Cook, Henry M; Ou, Albert J; Leu, Jonathan C; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J; Popović, Miloš A; Stojanović, Vladimir M
2015-12-24
Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems--from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a 'zero-change' approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.
Description of a 20 kilohertz power distribution system
NASA Technical Reports Server (NTRS)
Hansen, I. G.
1986-01-01
A single phase, 440 VRMS, 20 kHz power distribution system with a regulated sinusoidal wave form is discussed. A single phase power system minimizes the wiring, sensing, and control complexities required in a multi-sourced redundantly distributed power system. The single phase addresses only the distribution link; multiphase lower frequency inputs and outputs accommodation techniques are described. While the 440 V operating potential was initially selected for aircraft operating below 50,000 ft, this potential also appears suitable for space power systems. This voltage choice recognizes a reasonable upper limit for semiconductor ratings, yet will direct synthesis of 220 V, 3-phase power. A 20 kHz operating frequency was selected to be above the range of audibility, minimize the weight of reactive components, yet allow the construction of single power stages of 25 to 30 kW. The regulated sinusoidal distribution system has several advantages. With a regulated voltage, most ac/dc conversions involve rather simple transformer rectifier applications. A sinusoidal distribution system, when used in conjunction with zero crossing switching, represents a minimal source of EMI. The present state of 20 kHz power technology includes computer controls of voltage and/or frequency, low inductance cable, current limiting circuit protection, bi-directional power flow, and motor/generator operating using standard induction machines. A status update and description of each of these items and their significance is presented.
Description of a 20 Kilohertz power distribution system
NASA Technical Reports Server (NTRS)
Hansen, I. G.
1986-01-01
A single phase, 440 VRMS, 20 kHz power distribution system with a regulated sinusoidal wave form is discussed. A single phase power system minimizes the wiring, sensing, and control complexities required in a multi-sourced redundantly distributed power system. The single phase addresses only the distribution link; multiphase lower frequency inputs and outputs accommodation techniques are described. While the 440 V operating potential was initially selected for aircraft operating below 50,000 ft, this potential also appears suitable for space power systems. This voltage choice recognizes a reasonable upper limit for semiconductor ratings, yet will direct synthesis of 220 V, 3-phase power. A 20 kHz operating frequency was selected to be above the range of audibility, minimize the weight of reactive components, yet allow the construction of single power stages of 25 to 30 kW. The regulated sinusoidal distribution system has several advantages. With a regulated voltage, most ac/dc conversions involve rather simple transformer rectifier applications. A sinusoidal distribution system, when used in conjunction with zero crossing switching, represents a minimal source of EMI. The present state of 20 kHz power technology includes computer controls of voltage and/or frequency, low inductance cable, current limiting circuit protection, bi-directional power flow, and motor/generator operating using standard induction machines. A status update and description of each of these items and their significance is presented.
Review on open source operating systems for internet of things
NASA Astrophysics Data System (ADS)
Wang, Zhengmin; Li, Wei; Dong, Huiliang
2017-08-01
The Internet of Things (IoT) is an environment in which every device, everywhere, becomes smart in a smart world. The Internet of Things is growing rapidly; it is an integrated system of uniquely identifiable communicating devices which exchange information in a connected network to provide extensive services. IoT devices have very limited memory, computational power, and power supply. Traditional operating systems (OS) cannot meet the needs of IoT systems. In this paper, we thus analyze the challenges of IoT OSs and survey applicable open source OSs.
Bhargava, Puneet; Lackey, Amanda E; Dhand, Sabeen; Moshiri, Mariam; Jambhekar, Kedar; Pandey, Tarun
2013-03-01
We are in the midst of an evolving educational revolution. Use of digital devices such as smart phones and tablet computers is rapidly increasing among radiologists, who now regularly use them for medical, technical, and administrative tasks. These electronic tools provide a wide array of new capabilities to radiologists, allowing for faster, simpler, and more widespread distribution of educational material. The utility, future potential, and limitations of some of these powerful tools are discussed in this article. Published by Elsevier Inc.
A review on economic emission dispatch problems using quantum computational intelligence
NASA Astrophysics Data System (ADS)
Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.
2016-11-01
Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, the limited availability of natural resources, and global warming have placed this topic at the center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving Economic Emission Dispatch problems. QCI techniques like the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper will encourage researchers to use more QCI-based algorithms to obtain better optimal results for solving EED problems.
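For orientation, the sketch below is a minimal quantum-behaved PSO (QPSO) loop applied to a toy objective; it only illustrates the algorithm family reviewed here, since a real EED formulation would use fuel-cost plus emission objectives with generator limits and a power-balance constraint, and the objective and parameters below are hypothetical.

    # Minimal quantum-behaved particle swarm optimization (QPSO) sketch on a toy
    # objective; not a full economic emission dispatch model.
    import math
    import random

    def cost(x):                                    # hypothetical stand-in objective
        return sum((xi - 3.0) ** 2 for xi in x)

    def qpso(dim=3, n_particles=20, iters=300, beta=0.75, lo=-10.0, hi=10.0):
        X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        pbest = [x[:] for x in X]
        gbest = min(pbest, key=cost)[:]
        for _ in range(iters):
            mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
            for i, x in enumerate(X):
                for d in range(dim):
                    phi = random.random()
                    u = 1.0 - random.random()       # in (0, 1], avoids log(0)
                    attractor = phi * pbest[i][d] + (1.0 - phi) * gbest[d]
                    delta = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                    x[d] = attractor + delta if random.random() < 0.5 else attractor - delta
                if cost(x) < cost(pbest[i]):
                    pbest[i] = x[:]
                    if cost(x) < cost(gbest):
                        gbest = x[:]
        return gbest

    print(qpso())                                   # converges near [3, 3, 3]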
Simple geometric algorithms to aid in clearance management for robotic mechanisms
NASA Technical Reports Server (NTRS)
Copeland, E. L.; Ray, L. D.; Peticolas, J. D.
1981-01-01
Global geometric shapes such as lines, planes, circles, spheres, and cylinders, together with the associated computational algorithms which provide relatively inexpensive estimates of minimum spatial clearance for safe operations, were selected. The Space Shuttle, remote manipulator system, and the Power Extension Package are used as examples. Robotic mechanisms operate in quarters limited by external structures, and the problem of clearance is often of considerable interest. Safe clearance management is simple and suited to real-time calculation, whereas contact prediction requires more precision, sophistication, and computational overhead.
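Two representative clearance computations of this kind, sketched below with hypothetical coordinates (not the Shuttle/RMS geometry), are the clearance between two bounding spheres and between a point and a line segment.

    # Illustrative clearance checks with simple global shapes: cheap enough for
    # real-time monitoring; negative clearance indicates interference.
    import math

    def sphere_clearance(c1, r1, c2, r2):
        d = math.dist(c1, c2)
        return d - (r1 + r2)

    def point_segment_clearance(p, a, b, radius=0.0):
        ab = [b[i] - a[i] for i in range(3)]
        ap = [p[i] - a[i] for i in range(3)]
        t = sum(x * y for x, y in zip(ap, ab)) / sum(x * x for x in ab)
        t = max(0.0, min(1.0, t))                  # clamp to the segment
        closest = [a[i] + t * ab[i] for i in range(3)]
        return math.dist(p, closest) - radius

    print(sphere_clearance((0, 0, 0), 1.0, (3, 0, 0), 1.0))          # 1.0
    print(point_segment_clearance((1, 2, 0), (0, 0, 0), (4, 0, 0)))  # 2.0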
Reinforcement learning techniques for controlling resources in power networks
NASA Astrophysics Data System (ADS)
Kowli, Anupama Sunil
As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.
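As a minimal illustration of the "learn the control from observed dynamics" idea (not the approximate dynamic programming machinery developed in the dissertation), the sketch below uses tabular Q-learning to control a toy storage resource under a hypothetical two-level price; all names and numbers are invented.

    # Tiny tabular Q-learning sketch for a toy storage-dispatch problem.
    import random
    from collections import defaultdict

    ACTIONS = (-1, 0, 1)                            # discharge, idle, charge one unit

    def step(soc, action, price):
        soc2 = min(4, max(0, soc + action))         # state-of-charge limits 0..4
        reward = -(soc2 - soc) * price              # pay to charge, earn to discharge
        return soc2, reward

    Q = defaultdict(float)
    alpha, gamma, eps = 0.1, 0.95, 0.1
    soc, price = 2, 1.0
    for _ in range(20000):
        if random.random() < eps:
            a = random.choice(ACTIONS)              # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(soc, price, x)])
        soc2, r = step(soc, a, price)
        price2 = random.choice((1.0, 3.0))          # hypothetical two-level price
        best_next = max(Q[(soc2, price2, x)] for x in ACTIONS)
        Q[(soc, price, a)] += alpha * (r + gamma * best_next - Q[(soc, price, a)])
        soc, price = soc2, price2

    print(max(ACTIONS, key=lambda x: Q[(2, 3.0, x)]))   # learns to discharge when the price is high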
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-06-05
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
The Pitfalls of Mobile Devices in Learning: A Different View and Implications for Pedagogical Design
ERIC Educational Resources Information Center
Ting, Yu-Liang
2012-01-01
Studies have been devoted to the design, implementation, and evaluation of mobile learning in practice. A common issue among students' responses toward this type of learning concerns the pitfalls of mobile devices, including small screen, limited input options, and low computational power. As a result, mobile devices are not always perceived by…
Skylab program CSM verification analysis report
NASA Technical Reports Server (NTRS)
Schaefer, J. L.; Vanderpol, G. A.
1970-01-01
The application of the SINDA computer program for the transient thermodynamic simulation of the Apollo fuel cell/radiator system for the limit condition of the proposed Skylab mission is described. Results are included for the thermal constraints imposed upon the Pratt and Whitney fuel cell power capability by the Block 2 EPS radiator system operating under the Skylab fixed attitude orbits.
NASA Astrophysics Data System (ADS)
Carignano, Mauro G.; Costa-Castelló, Ramon; Roda, Vicente; Nigro, Norberto M.; Junco, Sergio; Feroldi, Diego
2017-08-01
Offering high efficiency and producing zero emissions, Fuel Cells (FCs) represent an excellent alternative to internal combustion engines for powering vehicles and alleviating the growing pollution in urban environments. Due to inherent limitations of FCs, which lead to a slow transient response, FC-based vehicles incorporate an energy storage system to cover the fast power variations. This paper considers an FC/supercapacitor platform that configures a hard-constrained powertrain, providing an adverse scenario for the energy management strategy (EMS) in terms of fuel economy and drivability. Focusing on palliating this problem, this paper presents a novel EMS based on the estimation of the short-term future energy demand, aiming at maintaining the state of energy of the supercapacitor between two limits, which are computed online. Such limits are designed to prevent active-constraint situations of both the FC and the supercapacitor, avoiding the use of friction brakes and situations of non-power compliance over a short future horizon. Simulation and experimentation in a case study corresponding to a hybrid electric bus show improvements in hydrogen consumption and power compliance compared to the widely reported Equivalent Consumption Minimization Strategy. Also, the comparison with the optimal strategy via Dynamic Programming shows room for improvement for the real-time strategies.
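A rule-of-thumb sketch of the limit-based idea is given below: the upper limit keeps headroom to absorb an estimated short-term braking energy, the lower limit keeps enough charge to cover an estimated traction deficit, and the power split steers the supercapacitor back inside the band. All names and numbers are hypothetical; this is not the authors' strategy.

    # Illustrative online state-of-energy (SoE) window plus power split.
    # Numbers and names are hypothetical.

    def soe_limits(e_max, brake_energy_est, deficit_energy_est):
        upper = e_max - brake_energy_est          # room left to absorb regeneration
        lower = deficit_energy_est                # charge kept to assist the fuel cell
        return max(0.0, lower), min(e_max, upper)

    def split_power(p_demand, soe, lower, upper, p_fc_max):
        # steer the supercapacitor back inside [lower, upper] while meeting demand
        if soe < lower:
            p_fc = min(p_fc_max, p_demand + 5.0)   # overproduce to recharge
        elif soe > upper:
            p_fc = max(0.0, p_demand - 5.0)        # underproduce to discharge
        else:
            p_fc = min(p_fc_max, max(0.0, p_demand))
        return p_fc, p_demand - p_fc               # remainder handled by the supercapacitor

    lo, hi = soe_limits(e_max=100.0, brake_energy_est=20.0, deficit_energy_est=15.0)
    print(split_power(p_demand=40.0, soe=10.0, lower=lo, upper=hi, p_fc_max=50.0))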
NASA Astrophysics Data System (ADS)
Laakso, Ilkka
2009-06-01
This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies below 2.5 GHz, and 1 mm resolution only below 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to below 100 W m^-2 was sufficient to ensure that the temperature rise in the eyes and brain was less than 1 °C over the whole frequency range.
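For readers unfamiliar with the bioheat model, the sketch below integrates a one-dimensional Pennes bioheat equation with an SAR source using an explicit finite-difference scheme; the tissue parameters and geometry are rough generic values, not the voxel head models or the multigrid solver used in the paper.

    # One-dimensional explicit finite-difference sketch of the Pennes bioheat
    # equation with an SAR source term (generic illustrative parameters).
    import numpy as np

    nx, dx, dt, steps = 100, 1e-3, 0.01, 6000          # 10 cm of tissue, 60 s
    k, rho, c = 0.5, 1050.0, 3600.0                    # W/m/K, kg/m^3, J/kg/K
    wb, rho_b, c_b, Ta = 8e-3, 1050.0, 3600.0, 37.0    # blood perfusion terms
    SAR = np.zeros(nx); SAR[:20] = 2.0                 # 2 W/kg near the surface

    T = np.full(nx, 37.0)
    for _ in range(steps):
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        dTdt = (k * lap + rho_b * c_b * wb * (Ta - T) + rho * SAR) / (rho * c)
        T += dt * dTdt
        T[0], T[-1] = T[1], 37.0                       # insulated surface, fixed deep tissue

    print(f"peak temperature rise ≈ {T.max() - 37.0:.3f} °C")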
Evaluating architecture impact on system energy efficiency
Yu, Shijie; Wang, Rui; Luan, Zhongzhi; Qian, Depei
2017-01-01
As the energy consumption has been surging in an unsustainable way, it is important to understand the impact of existing architecture designs from the energy efficiency perspective, which is especially valuable for High Performance Computing (HPC) and datacenter environments hosting tens of thousands of servers. One obstacle hindering the advance of comprehensive evaluation on energy efficiency is the deficient power measuring approach. Most energy studies rely on either external power meters or power models; both of these methods have intrinsic drawbacks in their practical adoption and measuring accuracy. Fortunately, the advent of Intel Running Average Power Limit (RAPL) interfaces has taken power measurement ability to the next level, with higher accuracy and finer time resolution. Therefore, we argue that now is the right time to conduct an in-depth evaluation of the existing architecture designs to understand their impact on system energy efficiency. In this paper, we leverage representative benchmark suites including serial and parallel workloads from diverse domains to evaluate architecture features such as Non Uniform Memory Access (NUMA), Simultaneous Multithreading (SMT) and Turbo Boost. The energy is tracked at the subcomponent level, such as Central Processing Unit (CPU) cores, uncore components and Dynamic Random-Access Memory (DRAM), by exploiting the power measurement ability exposed by RAPL. The experiments reveal non-intuitive results: 1) the mismatch between local compute and remote memory node caused by the NUMA effect not only generates a dramatic power and energy surge but also deteriorates the energy efficiency significantly; 2) for multithreaded applications such as the Princeton Application Repository for Shared-Memory Computers (PARSEC), most of the workloads benefit from a notable increase in energy efficiency using SMT, with more than a 40% decline in average power consumption; 3) Turbo Boost is effective at accelerating workload execution and further conserving energy; however, it may not be applicable on systems with a tight power budget. PMID:29161317
Evaluating architecture impact on system energy efficiency.
Yu, Shijie; Yang, Hailong; Wang, Rui; Luan, Zhongzhi; Qian, Depei
2017-01-01
As the energy consumption has been surging in an unsustainable way, it is important to understand the impact of existing architecture designs from the energy efficiency perspective, which is especially valuable for High Performance Computing (HPC) and datacenter environments hosting tens of thousands of servers. One obstacle hindering the advance of comprehensive evaluation on energy efficiency is the deficient power measuring approach. Most energy studies rely on either external power meters or power models; both of these methods have intrinsic drawbacks in their practical adoption and measuring accuracy. Fortunately, the advent of Intel Running Average Power Limit (RAPL) interfaces has taken power measurement ability to the next level, with higher accuracy and finer time resolution. Therefore, we argue that now is the right time to conduct an in-depth evaluation of the existing architecture designs to understand their impact on system energy efficiency. In this paper, we leverage representative benchmark suites including serial and parallel workloads from diverse domains to evaluate architecture features such as Non Uniform Memory Access (NUMA), Simultaneous Multithreading (SMT) and Turbo Boost. The energy is tracked at the subcomponent level, such as Central Processing Unit (CPU) cores, uncore components and Dynamic Random-Access Memory (DRAM), by exploiting the power measurement ability exposed by RAPL. The experiments reveal non-intuitive results: 1) the mismatch between local compute and remote memory node caused by the NUMA effect not only generates a dramatic power and energy surge but also deteriorates the energy efficiency significantly; 2) for multithreaded applications such as the Princeton Application Repository for Shared-Memory Computers (PARSEC), most of the workloads benefit from a notable increase in energy efficiency using SMT, with more than a 40% decline in average power consumption; 3) Turbo Boost is effective at accelerating workload execution and further conserving energy; however, it may not be applicable on systems with a tight power budget.
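As an aside on how RAPL readings can be obtained on Linux, the sketch below samples the package energy counter through the powercap sysfs interface; paths, permissions, and counter wrap-around handling vary by system, and this is not the measurement harness used in the paper.

    # Minimal sketch of reading package energy through the Linux powercap/RAPL
    # sysfs interface. The path below assumes package 0 is exposed; adjust as
    # needed, and note the counter may wrap and may require elevated permissions.
    import time
    from pathlib import Path

    RAPL = Path("/sys/class/powercap/intel-rapl:0")

    def read_energy_uj():
        return int((RAPL / "energy_uj").read_text())

    e0, t0 = read_energy_uj(), time.time()
    time.sleep(1.0)                                    # interval under observation
    e1, t1 = read_energy_uj(), time.time()
    joules = (e1 - e0) / 1e6                           # counter is in microjoules
    print(f"average package power ≈ {joules / (t1 - t0):.2f} W")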
Flow structure in continuous flow electrophoresis chambers
NASA Technical Reports Server (NTRS)
Deiber, J. A.; Saville, D. A.
1982-01-01
There are at least two ways that hydrodynamic processes can limit continuous flow electrophoresis. One arises from the sensitivity of the flow to small temperature gradients, especially at low flow rates and power levels. This sensitivity can be suppressed, at least in principle, by providing a carefully tailored, stabilizing temperature gradient in the cooling system that surrounds the flow channel. At higher power levels another limitation arises due to a restructuring of the main flow. This restructuring is caused by buoyancy, which is in turn affected by the electro-osmotic crossflow. Approximate solutions to the appropriate partial differential equations have been computed by finite difference methods. One set of results is described here to illustrate the strong coupling between the structure of the main (axial) flow and the electro-osmotic flow.
An extraordinary transmission analogue for enhancing microwave antenna performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pushpakaran, Sarin V., E-mail: sarincrema@gmail.com; Purushothaman, Jayakrishnan M.; Chandroth, Aanandan
2015-10-15
The theory of the diffraction limit proposed by H. A. Bethe limits the total power transfer through a subwavelength hole. Researchers all over the world have explored different techniques for boosting the transmission through subwavelength holes, resulting in the Extraordinary Transmission (EOT) behavior. We examine computationally and experimentally the concept of EOT in the microwave range for enhancing the radiation performance of a stacked dipole antenna working in the S band. It is shown that the front-to-back ratio of the antenna is considerably enhanced without affecting the impedance matching performance of the design. The computational analysis based on the Finite Difference Time Domain (FDTD) method reveals that the excitation of Fabry-Perot resonant modes on the slots is responsible for the performance enhancement.
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Photovoltaic Inverter Controllers Seeking AC Optimal Power Flow Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.
This paper considers future distribution networks featuring inverter-interfaced photovoltaic (PV) systems, and addresses the synthesis of feedback controllers that seek real- and reactive-power inverter setpoints corresponding to AC optimal power flow (OPF) solutions. The objective is to bridge the temporal gap between long-term system optimization and real-time inverter control, and enable seamless PV-owner participation without compromising system efficiency and stability. The design of the controllers is grounded on a dual ε-subgradient method, while semidefinite programming relaxations are advocated to bypass the non-convexity of AC OPF formulations. Global convergence of inverter output powers is analytically established for diminishing stepsize rules for cases where: i) computational limits dictate asynchronous updates of the controller signals, and ii) inverter reference inputs may be updated at a faster rate than the power-output settling time.
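The dual-subgradient idea can be sketched as follows for a single illustrative coupling constraint; this toy version omits the semidefinite relaxation and the asynchronous-update analysis of the paper, and all quantities are hypothetical.

    # Generic dual (sub)gradient sketch: each inverter minimizes its own cost
    # while a dual variable enforces a shared constraint sum(p) <= limit.

    def dual_subgradient(iters=500, step0=1.0, p_max=5.0, limit=6.0):
        lam = 0.0                                   # dual variable for sum(p) <= limit
        p = [0.0, 0.0, 0.0]                         # three hypothetical inverters
        for k in range(1, iters + 1):
            # primal step: minimize (p - p_max)^2 + lam * p for each inverter
            p = [max(0.0, min(p_max, p_max - lam / 2.0)) for _ in p]
            # dual step with diminishing stepsize, projected onto lam >= 0
            lam = max(0.0, lam + (step0 / k) * (sum(p) - limit))
        return p, lam

    print(dual_subgradient())                       # setpoints settle near the shared limit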
Facial Animations: Future Research Directions & Challenges
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Nowadays, computer facial animation is used in a significant multitude of fields that bring together human and social studies with the growth of computer games, films, and interactive multimedia. Authoring computer facial animation with complex and subtle expressions is challenging and fraught with problems. As a result, most current work is authored using general-purpose computer animation techniques, which often limit the production quality and quantity of facial animation. Even with the supplement of computing power, facial understanding, and software sophistication, the newly emerging face-centric methods are immature in nature. Therefore, this paper concentrates on defining and categorizing the work of the facial animation experts surveyed here, in order to define the recent state of the field, observed bottlenecks, and developing techniques. This paper further presents a real-time simulation model of human worry and howling, with a detailed discussion of the perception of astonishment, sorrow, annoyance, and panic.
Synthetic analog computation in living cells.
Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K
2013-05-30
A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
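A purely numerical sketch of why the logarithmic domain is attractive (not a model of the gene circuits themselves): once a signal is represented by its logarithm, addition, subtraction, and scaling of that representation implement multiplication, ratiometric division, and power laws.

    # Conceptual numeric sketch of log-domain computation.
    import math

    def to_log(u):
        return math.log(u)

    def from_log(y):
        return math.exp(y)

    a, b = 40.0, 8.0
    product   = from_log(to_log(a) + to_log(b))      # a * b
    ratio     = from_log(to_log(a) - to_log(b))      # a / b  (ratiometric)
    power_law = from_log(1.7 * to_log(a))            # a ** 1.7
    print(product, ratio, power_law)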
Emergency navigation without an infrastructure.
Gelenbe, Erol; Bi, Huibo
2014-08-18
Emergency navigation systems for buildings and other built environments, such as sport arenas or shopping centres, typically rely on simple sensor networks to detect emergencies and, then, provide automatic signs to direct the evacuees. The major drawbacks of such static wireless sensor network (WSN)-based emergency navigation systems are the very limited computing capacity, which makes adaptivity very difficult, and the restricted battery power, due to the low cost of sensor nodes for unattended operation. If static wireless sensor networks and cloud-computing can be integrated, then intensive computations that are needed to determine optimal evacuation routes in the presence of time-varying hazards can be offloaded to the cloud, but the disadvantages of limited battery life-time at the client side, as well as the high likelihood of system malfunction during an emergency still remain. By making use of the powerful sensing ability of smart phones, which are increasingly ubiquitous, this paper presents a cloud-enabled indoor emergency navigation framework to direct evacuees in a coordinated fashion and to improve the reliability and resilience for both communication and localization. By combining social potential fields (SPF) and a cognitive packet network (CPN)-based algorithm, evacuees are guided to exits in dynamic loose clusters. Rather than relying on a conventional telecommunications infrastructure, we suggest an ad hoc cognitive packet network (AHCPN)-based protocol to adaptively search optimal communication routes between portable devices and the network egress nodes that provide access to cloud servers, in a manner that spares the remaining battery power of smart phones and minimizes the time latency. Experimental results through detailed simulations indicate that smart human motion and smart network management can increase the survival rate of evacuees and reduce the number of drained smart phones in an evacuation process.
Emergency Navigation without an Infrastructure
Gelenbe, Erol; Bi, Huibo
2014-01-01
Emergency navigation systems for buildings and other built environments, such as sport arenas or shopping centres, typically rely on simple sensor networks to detect emergencies and, then, provide automatic signs to direct the evacuees. The major drawbacks of such static wireless sensor network (WSN)-based emergency navigation systems are the very limited computing capacity, which makes adaptivity very difficult, and the restricted battery power, due to the low cost of sensor nodes for unattended operation. If static wireless sensor networks and cloud-computing can be integrated, then intensive computations that are needed to determine optimal evacuation routes in the presence of time-varying hazards can be offloaded to the cloud, but the disadvantages of limited battery life-time at the client side, as well as the high likelihood of system malfunction during an emergency still remain. By making use of the powerful sensing ability of smart phones, which are increasingly ubiquitous, this paper presents a cloud-enabled indoor emergency navigation framework to direct evacuees in a coordinated fashion and to improve the reliability and resilience for both communication and localization. By combining social potential fields (SPF) and a cognitive packet network (CPN)-based algorithm, evacuees are guided to exits in dynamic loose clusters. Rather than relying on a conventional telecommunications infrastructure, we suggest an ad hoc cognitive packet network (AHCPN)-based protocol to adaptively search optimal communication routes between portable devices and the network egress nodes that provide access to cloud servers, in a manner that spares the remaining battery power of smart phones and minimizes the time latency. Experimental results through detailed simulations indicate that smart human motion and smart network management can increase the survival rate of evacuees and reduce the number of drained smart phones in an evacuation process. PMID:25196014
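A minimal sketch of the social-potential-fields idea referred to above, assuming a toy 2-D floor plan with point evacuees and exits (this is not the paper's CPN/AHCPN algorithm; the force law and all constants are illustrative):

```python
import numpy as np

# Toy social-potential-fields (SPF) step: each evacuee is attracted to the
# nearest exit and feels a short-range repulsion / longer-range attraction to
# neighbours, which tends to produce the loose clusters described above.
# This is NOT the paper's CPN/AHCPN algorithm; constants and the force law are
# purely illustrative.

def spf_step(positions, exits, dt=0.1, c_rep=0.5, c_att=0.05, exit_gain=1.0):
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        # attraction toward the nearest exit
        to_exits = exits - p
        target = to_exits[np.argmin(np.linalg.norm(to_exits, axis=1))]
        force = exit_gain * target / (np.linalg.norm(target) + 1e-9)
        # pairwise social potential: repulsive ~1/r^2, weak constant attraction
        for j, q in enumerate(positions):
            if i == j:
                continue
            r = q - p
            dist = np.linalg.norm(r) + 1e-9
            force += (c_att - c_rep / dist**2) * r / dist
        new_positions[i] = p + dt * force
    return new_positions

rng = np.random.default_rng(1)
positions = rng.random((10, 2)) * 20.0          # ten evacuees on a 20 m x 20 m floor
exits = np.array([[0.0, 10.0], [20.0, 10.0]])   # two exits on opposite walls
for _ in range(50):
    positions = spf_step(positions, exits)
print(np.round(positions, 2))
```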
Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes
NASA Astrophysics Data System (ADS)
Marvian, Milad; Lidar, Daniel A.
2017-01-01
We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.
Marvian, Milad; Lidar, Daniel A
2017-01-20
We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
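A schematic form of the penalty-term construction described in the abstract, in generic notation that is not necessarily the paper's:

```latex
% Schematic penalty-term construction (generic notation, not necessarily the
% paper's): the computational Hamiltonian is encoded into a subsystem code and
% a commuting penalty term of strength E_P is added,
\[
  H_{\mathrm{tot}}(t) \;=\; \bar{H}(t) \;+\; E_P\, H_P ,
  \qquad \bigl[\,\bar{H}(t),\, H_P\,\bigr] = 0 ,
\]
% where H_P is built from code stabilizer/gauge operators. Errors detectable by
% the code fail to commute with H_P and therefore incur an energy cost of order
% E_P; complete suppression is recovered in the large-penalty limit E_P -> infinity.
```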
NASA Technical Reports Server (NTRS)
El-Genk, Mohamed S.; Morley, Nicholas J.
1991-01-01
Multiyear civilian manned missions to explore the surface of Mars are thought by NASA to be possible early in the next century. Expeditions to Mars, as well as permanent bases, are envisioned to require enhanced piloted vehicles to conduct science and exploration activities. Piloted rovers, with 30 kWe user net power (for drilling, sampling and sample analysis, onboard computer and computer instrumentation, vehicle thermal management, and astronaut life support systems) in addition to mobility are being considered. The rover design, for this study, included a four-car, train-type vehicle complete with a hybrid solar photovoltaic/regenerative fuel cell auxiliary power system (APS). This system was designed to power the primary control vehicle. The APS supplies life support power for four astronauts and a limited degree of mobility, allowing the primary control vehicle to limp back to either a permanent base or an ascent vehicle. The results showed that the APS described above, with a mass of 667 kg, was sufficient to provide life support power and a top speed of five km/h for 6 hours per day. It was also seen that the factors that had the largest effect on the APS mass were the life support power, the number of astronauts, and the PV cell efficiency. The topics covered include: (1) power system options; (2) rover layout and design; (3) parametric analysis of total mass and power requirements for a manned Mars rover; (4) radiation shield design; and (5) energy conversion systems.
Limits on efficient computation in the physical world
NASA Astrophysics Data System (ADS)
Aaronson, Scott Joel
More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure/Shor separator"---a criterion that separates the already-verified quantum states from those that appear in Shor's factoring algorithm. I argue that such a separator should be based on a complexity classification of quantum states, and go on to create such a classification. Next I ask what happens to the quantum computing model if we take into account that the speed of light is finite---and in particular, whether Grover's algorithm still yields a quadratic speedup for searching a database. Refuting a claim by Benioff, I show that the surprising answer is yes. Finally, I analyze hypothetical models of computation that go even beyond quantum computing. I show that many such models would be as powerful as the complexity class PP, and use this fact to give a simple, quantum computing based proof that PP is closed under intersection. On the other hand, I also present one model---wherein we could sample the entire history of a hidden variable---that appears to be more powerful than standard quantum computing, but only slightly so.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
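A sketch of the claimed behaviour, assuming an MPI-style blocking collective; the set_power_state helper below is hypothetical and stands in for a platform-specific mechanism such as DVFS (this is not the patented implementation):

```python
from mpi4py import MPI

# Sketch of the behaviour in the claims above: each node lowers power to
# selected hardware components when it reaches a blocking (collective)
# operation and restores power once the collective completes, i.e. once all
# nodes have begun the blocking operation. set_power_state() is hypothetical;
# a real system would use a platform-specific mechanism such as DVFS.

def set_power_state(level):
    # Placeholder only (e.g. a cpufreq governor write or a vendor power API).
    print(f"[rank {MPI.COMM_WORLD.Get_rank()}] power state -> {level}")

comm = MPI.COMM_WORLD

# ... application work; each rank reaches the blocking operation asynchronously ...
set_power_state("reduced")   # reduce power while waiting on the collective
comm.Barrier()               # blocking operation specified by the parallel application
set_power_state("normal")    # all ranks have begun the collective; restore power
```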
NASA Astrophysics Data System (ADS)
Pan, Bing; Wang, Bo
2017-10-01
Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
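A simplified sketch of reliability-guided displacement propagation among the calculation points of one layer, with a placeholder correlate() routine standing in for the 3D inverse-compositional Gauss-Newton registration (names and values are illustrative, not the authors' code):

```python
import heapq
import numpy as np

# Simplified sketch of reliability-guided displacement propagation among the
# calculation points of one layer (not the full layer-wise DVC pipeline).
# correlate() is a placeholder for the 3D IC-GN subvolume registration: it
# would return the refined displacement and its correlation (reliability)
# for a point, seeded by a neighbour's displacement as the initial guess.

rng = np.random.default_rng(0)

def correlate(point, seed_disp):
    disp = seed_disp + 0.01 * rng.standard_normal(3)   # placeholder refinement
    reliability = 1.0 - 0.001 * rng.random()           # placeholder ZNCC value
    return disp, reliability

grid = {(i, j) for i in range(5) for j in range(5)}    # calculation points in a slice

def neighbours(p):
    return [(p[0] + di, p[1] + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (p[0] + di, p[1] + dj) in grid]

seed = (2, 2)
disp0, rel0 = correlate(seed, np.zeros(3))
results = {seed: disp0}
heap = [(-rel0, seed)]                                  # max-heap on reliability
while heap:
    _, p = heapq.heappop(heap)
    for q in neighbours(p):
        if q in results:
            continue
        disp, rel = correlate(q, results[p])            # neighbour result seeds the guess
        results[q] = disp
        heapq.heappush(heap, (-rel, q))
print(f"computed displacements at {len(results)} calculation points")
```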
Recent Developments in the Application of Biologically Inspired Computation to Chemical Sensing
NASA Astrophysics Data System (ADS)
Marco, S.; Gutierrez-Gálvez, A.
2009-05-01
Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy, an efficient combinatorial coding along with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. In this work, the state of the art concerning biologically inspired computation for chemical sensing will be reviewed. Instead of reviewing the whole body of computational neuroscience of olfaction, we restrict this review to the application of models to the processing of real chemical sensor data.
Ravicz, Michael E.; Rosowski, John J.
2012-01-01
The middle-ear input admittance relates sound power into the middle ear (ME) and sound pressure at the tympanic membrane (TM). ME input admittance was measured in the chinchilla ear canal as part of a larger study of sound power transmission through the ME into the inner ear. The middle ear was open, and the inner ear was intact or modified with small sensors inserted into the vestibule near the cochlear base. A simple model of the chinchilla ear canal, based on ear canal sound pressure measurements at two points along the canal and an assumption of plane-wave propagation, enables reliable estimates of YTM, the ME input admittance at the TM, from the admittance measured relatively far from the TM. YTM appears valid at frequencies as high as 17 kHz, a much higher frequency than previously reported. The real part of YTM decreases with frequency above 2 kHz. Effects of the inner-ear sensors (necessary for inner ear power computation) were small and generally limited to frequencies below 3 kHz. Computed power reflectance was ∼0.1 below 3.5 kHz, lower than with an intact ME below 2.5 kHz, and nearly 1 above 16 kHz. PMID:23039439
Energy-efficient quantum computing
NASA Astrophysics Data System (ADS)
Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko
2017-04-01
In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question that what are the fundamental limitations of energy consumption in scalable quantum computing. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type has been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme which reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.
Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taherzadeh, M.
1987-11-13
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the plasma separation program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.
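As an illustration of the kind of J-V and maximum-power calculation mentioned above, a toy single-diode betavoltaic model can be evaluated as follows (all parameter values are illustrative and are not taken from the BCELL code):

```python
import numpy as np

# Toy single-diode sketch of a betavoltaic J-V curve and its maximum power
# point. All parameter values are illustrative; none are taken from the BCELL
# code or from any particular source/converter combination.

kT_over_q = 0.0259                 # thermal voltage at room temperature, V
J_beta = 1e-6                      # beta-generated current density, A/cm^2
J_0 = 1e-12                        # diode saturation current density, A/cm^2

V = np.linspace(0.0, 0.4, 401)
J = J_beta - J_0 * (np.exp(V / kT_over_q) - 1.0)   # ideal-diode J-V relation
P = J * V

i_mpp = int(np.argmax(P))
V_oc = kT_over_q * np.log(J_beta / J_0 + 1.0)
print(f"open-circuit voltage ~ {V_oc:.3f} V")
print(f"maximum power ~ {P[i_mpp] * 1e6:.3f} uW/cm^2 at V = {V[i_mpp]:.3f} V")
```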
Molecular Dynamics Simulations of Nucleic Acids. From Tetranucleotides to the Ribosome.
Šponer, Jiří; Banáš, Pavel; Jurečka, Petr; Zgarbová, Marie; Kührová, Petra; Havrila, Marek; Krepl, Miroslav; Stadlbauer, Petr; Otyepka, Michal
2014-05-15
We present a brief overview of explicit solvent molecular dynamics (MD) simulations of nucleic acids. We explain physical chemistry limitations of the simulations, namely, the molecular mechanics (MM) force field (FF) approximation and limited time scale. Further, we discuss relations and differences between simulations and experiments, compare standard and enhanced sampling simulations, discuss the role of starting structures, comment on different versions of nucleic acid FFs, and relate MM computations with contemporary quantum chemistry. Despite its limitations, we show that MD is a powerful technique for studying the structural dynamics of nucleic acids with a fast growing potential that substantially complements experimental results and aids their interpretation.
A Computational Procedure for the Protection of Industrial Power Systems.
1981-11-01
[Scanned report text garbled in extraction; recoverable fragments include table-of-contents entries (Summary; Conclusions; Recommendations for Further Work) and body text noting that two types of fuses are used in the coordination program, current-limiting fuses among them, and that curve data were digitized with a NUMONICS Electronic Graphics Calculator (EGC), which provided a convenient way to enter X-Y coordinate data representing any curve onto disk.]
Rotary engine performance limits predicted by a zero-dimensional model
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1992-01-01
A parametric study was performed to determine the performance limits of a rotary combustion engine. This study shows how well increasing the combustion rate, insulating, and turbocharging increase brake power and decrease fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine. Performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engine performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980s. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer program since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.
Theoretical Heterogeneous Catalysis: Scaling Relationships and Computational Catalyst Design.
Greeley, Jeffrey
2016-06-07
Scaling relationships are theoretical constructs that relate the binding energies of a wide variety of catalytic intermediates across a range of catalyst surfaces. Such relationships are ultimately derived from bond order conservation principles that were first introduced several decades ago. Through the growing power of computational surface science and catalysis, these concepts and their applications have recently begun to have a major impact in studies of catalytic reactivity and heterogeneous catalyst design. In this review, the detailed theory behind scaling relationships is discussed, and the existence of these relationships for catalytic materials ranging from pure metal to oxide surfaces, for numerous classes of molecules, and for a variety of catalytic surface structures is described. The use of the relationships to understand and elucidate reactivity trends across wide classes of catalytic surfaces and, in some cases, to predict optimal catalysts for certain chemical reactions, is explored. Finally, the observation that, in spite of the tremendous power of scaling relationships, their very existence places limits on the maximum rates that may be obtained for the catalyst classes in question is discussed, and promising strategies are explored to overcome these limitations to usher in a new era of theory-driven catalyst design.
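A generic statement of such a scaling relationship, in illustrative notation:

```latex
% Generic adsorption-energy scaling relationship (illustrative notation):
\[
  \Delta E_{AH_x} \;\approx\; \gamma\, \Delta E_{A} + \xi ,
  \qquad \gamma \approx \frac{x_{\max} - x}{x_{\max}} ,
\]
% i.e. the binding energy of a hydrogenated intermediate AH_x varies linearly
% with that of the central atom A across surfaces, with a slope estimated from
% bond-order conservation (x_max = maximum number of H atoms that can bind to A).
```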
Simplifying Chandra aperture photometry with srcflux
NASA Astrophysics Data System (ADS)
Glotfelty, Kenny
2014-11-01
This poster will highlight some of the features of the srcflux script in CIAO. This script combines many threads and tools together to compute photometric properties for sources: counts, rates, various fluxes, and confidence intervals or upper limits. Beginning and casual X-ray astronomers greatly benefit from the simple interface: just specify the event file and a celestial location, while power-users and X-ray astronomy experts can take advantage of all the parameters to automatically produce catalogs for entire fields. Current limitations and future enhancements of the script will also be presented.
THE EINSTEIN@HOME SEARCH FOR RADIO PULSARS AND PSR J2007+2722 DISCOVERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, B.; Knispel, B.; Aulbert, C.
Einstein@Home aggregates the computer power of hundreds of thousands of volunteers from 193 countries, to search for new neutron stars using data from electromagnetic and gravitational-wave detectors. This paper presents a detailed description of the search for new radio pulsars using Pulsar ALFA survey data from the Arecibo Observatory. The enormous computing power allows this search to cover a new region of parameter space; it can detect pulsars in binary systems with orbital periods as short as 11 minutes. We also describe the first Einstein@Home discovery, the 40.8 Hz isolated pulsar PSR J2007+2722, and provide a full timing model. PSR J2007+2722's pulse profile is remarkably wide with emission over almost the entire spin period. This neutron star is most likely a disrupted recycled pulsar, about as old as its characteristic spin-down age of 404 Myr. However, there is a small chance that it was born recently, with a low magnetic field. If so, upper limits on the X-ray flux suggest but cannot prove that PSR J2007+2722 is at least ~100 kyr old. In the future, we expect that the massive computing power provided by volunteers should enable many additional radio pulsar discoveries.
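The characteristic spin-down age quoted above follows from the standard definition:

```latex
% Characteristic spin-down age (standard definition):
\[
  \tau_c \;=\; \frac{P}{2\dot{P}} ,
\]
% so for PSR J2007+2722, with spin frequency f = 40.8 Hz (P = 1/f ~ 24.5 ms),
% the measured \dot{P} gives \tau_c ~ 404 Myr, the value quoted above.
```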
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-02-05
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D
2012-10-23
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
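A schematic sketch of the claimed budget-based scheme (not the patented implementation; all numbers and names are illustrative):

```python
import heapq

# Schematic sketch of the claimed scheme (not the patented implementation):
# applications run in execution-priority order at an initial power level; once
# cumulative consumption reaches a predetermined threshold, a power-conservation
# action (here, lowering the node power level) is applied. Numbers are illustrative.

POWER_BUDGET_KWH = 60.0     # predetermined power consumption threshold
INITIAL_POWER_KW = 10.0     # initial power level provided to the compute nodes
REDUCED_POWER_KW = 6.0      # power level after the conservation action

# (execution priority, application name, runtime in hours); priority 1 runs first
apps = [(1, "cfd_solver", 4.0), (2, "post_proc", 3.0), (3, "archiver", 2.0)]
heapq.heapify(apps)

consumed_kwh = 0.0
power_kw = INITIAL_POWER_KW
while apps:
    priority, name, hours = heapq.heappop(apps)
    consumed_kwh += power_kw * hours
    print(f"ran {name} (priority {priority}) at {power_kw} kW; total {consumed_kwh:.1f} kWh")
    if consumed_kwh >= POWER_BUDGET_KWH and power_kw > REDUCED_POWER_KW:
        power_kw = REDUCED_POWER_KW          # power-conservation action
        print("budget threshold reached: reducing compute-node power")
```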
Cosmic microwave background constraints for global strings and global monopoles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Lizarraga, Joanes; Urrestilla, Jon
We present the first cosmic microwave background (CMB) power spectra from numerical simulations of the global O(N) linear σ-model, with N = 2, 3, which have global strings and monopoles as topological defects. In order to compute the CMB power spectra we compute the unequal time correlators (UETCs) of the energy-momentum tensor, showing that they fall off at high wave number faster than naive estimates based on the geometry of the defects, indicating non-trivial (anti-)correlations between the defects and the surrounding Goldstone boson field. We obtain source functions for Einstein-Boltzmann solvers from the UETCs, using a recently developed method that improves the modelling at the radiation-matter transition. We show that the interpolation function that mimics the transition is similar to other defect models, but not identical, confirming the non-universality of the interpolation function. The CMB power spectra for global strings and global monopoles have the same overall shape as those obtained using the non-linear σ-model approximation, which is well captured by a large-N calculation. However, the amplitudes are larger than the large-N calculation would naively predict, and in the case of global strings much larger: a factor of 20 at the peak. Finally we compare the CMB power spectra with the latest CMB data in order to put limits on the allowed contribution to the temperature power spectrum at multipole l = 10 of 1.7% for global strings and 2.4% for global monopoles. These limits correspond to symmetry-breaking scales of 2.9×10^15 GeV (6.3×10^14 GeV with the expected logarithmic scaling of the effective string tension between the simulation time and decoupling) and 6.4×10^15 GeV respectively. The bound on global strings is a significant one for the ultra-light axion scenario with axion masses m_a ≲ 10^−28 eV. These upper limits indicate that gravitational waves from global topological defects will not be observable at the gravitational wave observatory LISA.
NASA Technical Reports Server (NTRS)
Evans, Austin Lewis
1988-01-01
The paper presents a computer program developed to model the steady-state performance of the tapered artery heat pipe for use in the radiator of the solar dynamic power system of the NASA Space Station. The program solves six governing equations to ascertain which one is limiting the maximum heat transfer rate of the heat pipe. The present model appeared to be slightly better than the LTV model in matching the 1-g data for the standard 15-ft test heat pipe.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
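For reference, the underlying mixed-model form in generic notation (a sketch, not the exact BOLT-LMM formulation):

```latex
% Generic (infinitesimal) linear mixed model underlying such association tests:
\[
  y = X\beta + g + \varepsilon, \qquad
  g \sim \mathcal{N}\!\left(0,\, \sigma_g^2 K\right), \qquad
  \varepsilon \sim \mathcal{N}\!\left(0,\, \sigma_e^2 I\right),
\]
% where K is the N x N genetic relationship matrix built from the M SNPs.
% Forming and decomposing K drives the O(MN^2) cost; BOLT-LMM instead uses
% iterative O(MN)-time computations and replaces the implicit Gaussian prior on
% per-SNP effects with a mixture-of-Gaussians (non-infinitesimal) prior.
```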
Computer-Aided Analysis of Patents for Product Technology Maturity Forecasting
NASA Astrophysics Data System (ADS)
Liang, Yanhong; Gan, Dequan; Guo, Yingchun; Zhang, Peng
Product technology maturity foresting is vital for any enterprises to hold the chance for innovation and keep competitive for a long term. The Theory of Invention Problem Solving (TRIZ) is acknowledged both as a systematic methodology for innovation and a powerful tool for technology forecasting. Based on TRIZ, the state -of-the-art on the technology maturity of product and the limits of application are discussed. With the application of text mining and patent analysis technologies, this paper proposes a computer-aided approach for product technology maturity forecasting. It can overcome the shortcomings of the current methods.
Smart Computer-Assisted Markets
NASA Astrophysics Data System (ADS)
McCabe, Kevin A.; Rassenti, Stephen J.; Smith, Vernon L.
1991-10-01
The deregulation movement has motivated the experimental study of auction markets designed for interdependent network industries such as natural gas pipelines or electric power systems. Decentralized agents submit bids to buy commodity and offers to sell transportation and commodity to a computerized dispatch center. Computer algorithms determine prices and allocations that maximize the gains from exchange in the system relative to the submitted bids and offers. The problem is important, because traditionally the scale and coordination economies in such industries were thought to require regulation. Laboratory experiments are used to study feasibility, limitations, incentives, and performance of proposed market designs for deregulation, providing motivation for new theory.
Fuel cells provide a revenue-generating solution to power quality problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, J.M. Jr.
Electric power quality and reliability are becoming increasingly important as computers and microprocessors assume a larger role in commercial, health care and industrial buildings and processes. At the same time, constraints on transmission and distribution of power from central stations are making local areas vulnerable to low voltage, load addition limitations, power quality and power reliability problems. Many customers currently utilize some form of premium power in the form of standby generators and/or UPS systems. These include customers where continuous power is required because of health and safety or security reasons (hospitals, nursing homes, places of public assembly, air traffic control, military installations, telecommunications, etc.). These also include customers with industrial or commercial processes which can't tolerate an interruption of power because of product loss or equipment damage. The paper discusses the use of the PC25 fuel cell power plant for backup and parallel power supplies for critical industrial applications. Several PC25 installations are described: the use of propane in a PC25; the use by rural cooperatives; and a demonstration of PC25 technology using landfill gas.
A-priori testing of sub-grid models for chemically reacting nonpremixed turbulent shear flows
NASA Technical Reports Server (NTRS)
Jimenez, J.; Linan, A.; Rogers, M. M.; Higuera, F. J.
1996-01-01
The beta-assumed-pdf approximation of (Cook & Riley 1994) is tested as a subgrid model for the LES computation of nonpremixed turbulent reacting flows, in the limit of cold infinitely fast chemistry, for two plane turbulent mixing layers with different degrees of intermittency. Excellent results are obtained for the computation of integrals properties such as product mass fraction, and the model is applied to other quantities such as powers of the temperature and the pdf of the scalar itself. Even in these cases the errors are small enough to be useful in practical applications. The analysis is extended to slightly out of equilibrium problems such as the generation of radicals, and formulated in terms of the pdf of the scalar gradients. It is shown that the conditional gradient distribution is universal in a wide range of cases whose limits are established. Within those limits, engineering approximations to the radical concentration are also possible. It is argued that the experiments in this paper are essentially in the limit of infinite Reynolds number.
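The assumed-beta-PDF closure referred to above can be written in its standard form:

```latex
% Assumed-beta-PDF closure (standard form): the subgrid distribution of the
% conserved scalar Z is modelled as a beta density whose two parameters are
% fixed by the filtered mean and variance,
\[
  P(Z) = \frac{Z^{a-1}(1-Z)^{b-1}}{B(a,b)}, \qquad
  a = \bar{Z}\,\gamma, \quad b = (1-\bar{Z})\,\gamma, \quad
  \gamma = \frac{\bar{Z}\,(1-\bar{Z})}{\overline{Z'^2}} - 1 ,
\]
% and filtered quantities such as the product mass fraction follow by
% integrating the equilibrium (fast-chemistry) state relations against P(Z).
```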
From photons to big-data applications: terminating terabits
2016-01-01
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Song
CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a certain physical space. Since the numerical result of CFD computation is very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabytes), and more and more interactions between user and the datasets are required. For the traditional VR application, the limitation of computing power is a major factor preventing large datasets from being visualized effectively. This thesis presents a new system designed to speed up the traditional VR application by using parallel computing and distributed computing, and the idea of using hand-held devices to enhance the interaction between a user and the VR CFD application as well. Techniques in different research areas including scientific visualization, parallel computing, distributed computing and graphical user interface design are used in the development of the final system. As a result, the new system can flexibly be built on a heterogeneous computing environment and dramatically shorten the computation time.
From photons to big-data applications: terminating terabits.
Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A
2016-03-06
Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.
Microelectromechanical reprogrammable logic device.
Hafiz, M A A; Kosuru, L; Younis, M I
2016-03-29
In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme.
Microelectromechanical reprogrammable logic device
Hafiz, M. A. A.; Kosuru, L.; Younis, M. I.
2016-01-01
In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme. PMID:27021295
Integration of nanoscale memristor synapses in neuromorphic computing architectures
NASA Astrophysics Data System (ADS)
Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis
2013-09-01
Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption features or their ability to carry out robust and efficient computation using massively parallel arrays of limited precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low power, but also variable and unreliable solid-state devices that can potentially extend the offerings of availing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristor and scale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
On Shaft Data Acquisition System (OSDAS)
NASA Technical Reports Server (NTRS)
Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles
2012-01-01
On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on most any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 channels of 24-bit, high-sample-rate input channels, phase synchronized, with an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test article itself, packaged in a compact/rugged form factor, consuming limited power, all while rotating at high turbine speeds.
Wash load and bed-material load transport in the Yellow River
Yang, C.T.; Simoes, F.J.M.
2005-01-01
It has been the conventional assumption that wash load is supply limited and is only indirectly related to the hydraulics of a river. Hydraulic engineers also assumed that bed-material load concentration is independent of wash load concentration. This paper provides a detailed analysis of the Yellow River sediment transport data to determine whether the above assumptions are true and whether wash load concentration can be computed from the original unit stream power formula and the modified unit stream power formula for sediment-laden flows. A systematic and thorough analysis of 1,160 sets of data collected from 9 gauging stations along the Middle and Lower Yellow River confirmed that the method suggested by the conjunctive use of the two formulas can be used to compute wash load, bed-material load, and total load in the Yellow River with accuracy. Journal of Hydraulic Engineering © ASCE.
Abbey, Craig K.; Zemp, Roger J.; Liu, Jie; Lindfors, Karen K.; Insana, Michael F.
2009-01-01
We investigate and extend the ideal observer methodology developed by Smith and Wagner to detection and discrimination tasks related to breast sonography. We provide a numerical approach for evaluating the ideal observer acting on radio-frequency (RF) frame data, which involves inversion of large nonstationary covariance matrices, and we describe a power-series approach to computing this inverse. Considering a truncated power series suggests that the RF data be Wiener-filtered before forming the final envelope image. We have compared human performance for Wiener-filtered and conventional B-mode envelope images using psychophysical studies for 5 tasks related to breast cancer classification. We find significant improvements in visual detection and discrimination efficiency in four of these five tasks. We also use the Smith-Wagner approach to distinguish between human and processing inefficiencies, and find that generally the principle limitation comes from the information lost in computing the final envelope image. PMID:16468454
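A small numerical sketch of the power-series (Neumann) approach to applying the inverse of a covariance matrix, using a synthetic matrix rather than the authors' nonstationary RF covariance model:

```python
import numpy as np

# Sketch of the power-series (Neumann) approach to applying the inverse of a
# covariance matrix K to a data vector x: with K = c (I - A) and the spectral
# radius of A below one, K^{-1} x = (1/c) * sum_k A^k x, and truncating the sum
# gives an approximate prewhitening / Wiener-type filter. The matrix here is
# synthetic, not the nonstationary RF covariance of the paper.

rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
K = B @ B.T / n + np.eye(n)             # synthetic symmetric positive-definite covariance
x = rng.standard_normal(n)

c = 1.01 * np.linalg.norm(K, 2)         # scale so that A = I - K/c is a contraction
A = np.eye(n) - K / c

approx = np.zeros_like(x)
term = x.copy()
for _ in range(50):                      # truncated power series
    approx += term
    term = A @ term
approx /= c

exact = np.linalg.solve(K, x)
print("relative error of truncated series:",
      np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```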
Perturbation analysis of the limit cycle of the free van der Pol equation
NASA Technical Reports Server (NTRS)
Dadfar, M. B.; Geer, J.; Anderson, C. M.
1983-01-01
A power series expansion in the damping parameter, epsilon, of the limit cycle of the free van der Pol equation is constructed and analyzed. Coefficients in the expansion are computed in exact rational arithmetic using the symbolic manipulation system MACSYMA and using a FORTRAN program. The series is analyzed using Pade approximants. The convergence of the series for the maximum amplitude of the limit cycle is limited by two pair of complex conjugate singularities in the complex epsilon-plane. A new expansion parameter is introduced which maps these singularities to infinity and leads to a new expansion for the amplitude which converges for all real values of epsilon. Amplitudes computed from this transformed series agree very well with reported numerical and asymptotic results. For the limit cycle itself, convergence of the series expansion is limited by three pair of complex conjugate branch point singularities. Two pair remain fixed throughout the cycle, and correspond to the singularities found in the maximum amplitude series, while the third pair moves in the epsilon-plane as a function of t from one of the fixed pairs to the other. The limit cycle series is transformed using a new expansion parameter, which leads to a new series that converges for larger values of epsilon.
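For reference, the equation being expanded and the form of the series, in standard notation:

```latex
% The free van der Pol equation and the expansions referred to above
% (standard notation):
\[
  \ddot{x} - \epsilon\,(1 - x^2)\,\dot{x} + x = 0 , \qquad
  x(t;\epsilon) = \sum_{n \ge 0} \epsilon^n x_n(t) , \qquad
  a_{\max}(\epsilon) = \sum_{n \ge 0} a_n \epsilon^n ,
\]
% with leading amplitude a_max(0) = 2, the familiar weak-damping limit-cycle
% amplitude; the Pade analysis concerns the singularities of these series in
% the complex epsilon-plane.
```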
Damle, Kedar; Majumdar, Satya N; Tripathi, Vikram; Vivo, Pierpaolo
2011-10-21
We compute analytically the full distribution of the Andreev conductance G_NS of a metal-superconductor interface with a large number N_c of transverse modes, using a random matrix approach. The probability distribution P(G_NS, N_c) in the limit of large N_c displays a Gaussian behavior near the average value
NASA Technical Reports Server (NTRS)
1975-01-01
Major developments are examined which have taken place to date in the analysis of the power and energy demands on the APU/Hydraulic/Actuator Subsystem for space shuttle during the entry-to-touchdown (not including rollout) flight regime. These developments are given in the form of two subroutines which were written for use with the Space Shuttle Functional Simulator. The first subroutine calculates the power and energy demand on each of the three hydraulic systems due to control surface (inboard/outboard elevons, rudder, speedbrake, and body flap) activity. The second subroutine incorporates the R. I. priority rate limiting logic which limits control surface deflection rates as a function of the number of failed hydraulic systems. Typical results of this analysis are included, and listings of the subroutines are presented in appendices.
NASA Astrophysics Data System (ADS)
Singh, Avneet; Papa, Maria Alessandra; Eggenstein, Heinz-Bernd; Zhu, Sylvia; Pletsch, Holger; Allen, Bruce; Bock, Oliver; Maschenchalk, Bernd; Prix, Reinhard; Siemens, Xavier
2016-09-01
We present results of a high-frequency all-sky search for continuous gravitational waves from isolated compact objects in LIGO's fifth science run (S5) data, using the computing power of the Einstein@Home volunteer computing project. This is the only dedicated continuous gravitational wave search that probes this high-frequency range on S5 data. We find no significant candidate signal, so we set 90% confidence level upper limits on continuous gravitational wave strain amplitudes. At the lower end of the search frequency range, around 1250 Hz, the most constraining upper limit is 5.0×10^−24, while at the higher end, around 1500 Hz, it is 6.2×10^−24. Based on these upper limits, and assuming a fiducial value of the principal moment of inertia of 10^38 kg m^2, we can exclude objects with ellipticities higher than roughly 2.8×10^−7 within 100 pc of Earth with rotation periods between 1.3 and 1.6 milliseconds.
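The ellipticity limits quoted above follow from the standard relation between strain amplitude and ellipticity for a rotating triaxial neutron star:

```latex
% Standard relation used to convert strain upper limits into ellipticity limits
% for a rotating triaxial neutron star at distance d, with principal moment of
% inertia I_zz and gravitational-wave frequency f (twice the spin frequency):
\[
  h_0 \;=\; \frac{4\pi^2 G}{c^4}\,\frac{I_{zz}\,\epsilon\, f^2}{d} .
\]
% With I_zz = 10^38 kg m^2, h_0 of order 5-6 x 10^-24 and f ~ 1250-1500 Hz,
% sources within ~100 pc must have epsilon <~ 2.8 x 10^-7, consistent with the
% quoted limit.
```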
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
NASA Astrophysics Data System (ADS)
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Superadiabatic Controlled Evolutions and Universal Quantum Computation.
Santos, Alan C; Sarandy, Marcelo S
2015-10-29
Adiabatic state engineering is a powerful technique in quantum information and quantum control. However, its performance is limited by the adiabatic theorem of quantum mechanics. In this scenario, shortcuts to adiabaticity, such as provided by the superadiabatic theory, constitute a valuable tool to speed up the adiabatic quantum behavior. Here, we propose a superadiabatic route to implement universal quantum computation. Our method is based on the realization of piecewise controlled superadiabatic evolutions. Remarkably, they can be obtained by simple time-independent counter-diabatic Hamiltonians. In particular, we discuss the implementation of fast rotation gates and arbitrary n-qubit controlled gates, which can be used to design different sets of universal quantum gates. Concerning the energy cost of the superadiabatic implementation, we show that it is dictated by the quantum speed limit, providing an upper bound for the corresponding adiabatic counterparts.
Superadiabatic Controlled Evolutions and Universal Quantum Computation
Santos, Alan C.; Sarandy, Marcelo S.
2015-01-01
Adiabatic state engineering is a powerful technique in quantum information and quantum control. However, its performance is limited by the adiabatic theorem of quantum mechanics. In this scenario, shortcuts to adiabaticity, such as provided by the superadiabatic theory, constitute a valuable tool to speed up the adiabatic quantum behavior. Here, we propose a superadiabatic route to implement universal quantum computation. Our method is based on the realization of piecewise controlled superadiabatic evolutions. Remarkably, they can be obtained by simple time-independent counter-diabatic Hamiltonians. In particular, we discuss the implementation of fast rotation gates and arbitrary n-qubit controlled gates, which can be used to design different sets of universal quantum gates. Concerning the energy cost of the superadiabatic implementation, we show that it is dictated by the quantum speed limit, providing an upper bound for the corresponding adiabatic counterparts. PMID:26511064
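A generic form of the counter-diabatic (transitionless) driving term underlying such shortcuts, in illustrative notation:

```latex
% Generic counter-diabatic (transitionless) driving term (illustrative
% notation): adding H_CD to H_0(t) forces exact following of the instantaneous
% eigenstates |n(t)> of H_0(t); the paper's piecewise construction achieves the
% analogous effect with simple time-independent counter-diabatic Hamiltonians.
\[
  H_{\mathrm{CD}}(t) = i\hbar \sum_n
    \Bigl( \lvert \partial_t n \rangle\langle n \rvert
         - \langle n \vert \partial_t n \rangle\,
           \lvert n \rangle\langle n \rvert \Bigr),
  \qquad
  H_{\mathrm{SA}}(t) = H_0(t) + H_{\mathrm{CD}}(t) .
\]
```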
Reconditioning of Batteries on the International Space Station
NASA Technical Reports Server (NTRS)
Hajela, Gyan; Cohen, Fred; Dalton, Penni
2004-01-01
Primary source of electric power for the International Space Station (ISS) is the photovoltaic module (PVM). At assembly complete stage, the ISS will be served by 4 PVMs. Each PVM contains two independent power channels such that one failure will result in loss of only one power channel. During early stages of assembly, the ISS is served by only one PVM designated as P6. Solar arrays are used to convert solar flux into electrical power. Nickel hydrogen batteries are used to store electrical power for use during periods when the solar input is not adequate to support channel loads. Batteries are operated per established procedures that ensure that they are maintained within specified temperature limits, charge current is controlled to conform to a specified charge profile, and battery voltages are maintained within specified limits. Both power channels on the PVM P6 have been operating flawlessly since December 2000 with 100 percent power availability. All components, including batteries, are monitored regularly to ensure that they are operating within specified limits and to trend their wear out and age effects. The paper briefly describes the battery trend data. Batteries have started to show some effects of aging and a battery reconditioning procedure is being evaluated at this time. Reconditioning is expected to reduce cell voltage divergence and provide data that can be used to update the state of charge (SOC) computation in the software to account for battery age. During reconditioning, each battery, one at a time, will be discharged per a specified procedure and then returned to a full state of charge. The paper describes the reconditioning procedure and the expected benefits. The reconditioning procedures have been thoroughly coordinated by all affected technical teams and approved by all required boards. The reconditioning is tentatively scheduled for September 2004.
Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei
2014-07-20
We develop an analytical and numerical model for performing simulation of light extraction through the planar output interface of the light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of injected current is a peculiar feature of the LEDs in which top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are the bi-plane computation domain, related to other areas of numerical grid (NG) cells in these two planes, representation of light-generating layer by an ensemble of point light sources, numerical "collection" of light photons from the area limited by acceptance circle and adjustment of NG-cell areas in the computation procedure by the angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power as well as the total output power at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluation of the output optical performance of LEDs with periodical or symmetrical configuration of the electrodes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elbert, Stephen T.; Kalsi, Karanjit; Vlachopoulou, Maria
Financial Transmission Rights (FTRs) help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, a novel non-linear dynamical system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on large-scale systems using data from the Western Electricity Coordinating Council (WECC). The NDS is demonstrated to outperform the widely used CPLEX algorithms while exhibiting superior scalability. Furthermore, the NDS based solver can be easily parallelized, which results in significant computational improvement.
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: "NIT-picking"(!!!), to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science"/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata, ..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable but limiting IMPEDING CRUTCHES(!!!), which ONLY IMPEDE latter-days new-insights!!!
NASA Technical Reports Server (NTRS)
Goldstein, M. L.
1976-01-01
The propagation of charged particles through interstellar and interplanetary space has often been described as a random process in which the particles are scattered by ambient electromagnetic turbulence. In general, this changes both the magnitude and direction of the particles' momentum. Some situations for which scattering in direction (pitch angle) is of primary interest were studied. A perturbed orbit, resonant scattering theory for pitch-angle diffusion in magnetostatic turbulence was slightly generalized and then utilized to compute the diffusion coefficient for spatial propagation parallel to the mean magnetic field, Kappa. All divergences inherent in the quasilinear formalism when the power spectrum of the fluctuation field falls off as K to the minus Q power (Q less than 2) were removed. Various methods of computing Kappa were compared and limits on the validity of the theory discussed. For Q less than 1 or 2, the various methods give roughly comparable values of Kappa, but use of perturbed orbits systematically results in a somewhat smaller Kappa than can be obtained from quasilinear theory.
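As background (not stated in the abstract above, and included here as an assumption about the notation), the standard quasilinear relation between the pitch-angle diffusion coefficient D_{mu mu} and the parallel spatial diffusion coefficient is

```latex
\kappa_{\parallel} \;=\; \frac{v^{2}}{8}\int_{-1}^{+1}\frac{\left(1-\mu^{2}\right)^{2}}{D_{\mu\mu}(\mu)}\,d\mu ,
```

where v is the particle speed and mu the pitch-angle cosine; the divergences mentioned above arise when D_{mu mu} vanishes too rapidly near mu = 0.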
Data collapse and critical dynamics in neuronal avalanche data
NASA Astrophysics Data System (ADS)
Butler, Thomas; Friedman, Nir; Dahmen, Karin; Beggs, John; Deville, Lee; Ito, Shinya
2012-02-01
The tasks of information processing, computation, and response to stimuli require neural computation to be remarkably flexible and diverse. To optimally satisfy the demands of neural computation, neuronal networks have been hypothesized to operate near a non-equilibrium critical point. In spite of their importance for neural dynamics, experimental evidence for critical dynamics has been primarily limited to power law statistics that can also emerge from non-critical mechanisms. By tracking the firing of large numbers of synaptically connected cortical neurons and comparing the resulting data to the predictions of critical phenomena, we show that cortical tissues in vitro can function near criticality. Among the most striking predictions of critical dynamics is that the mean temporal profiles of avalanches of widely varying durations are quantitatively described by a single universal scaling function (data collapse). We show for the first time that this prediction is confirmed in neuronal networks. We also show that the data have three additional features predicted by critical phenomena: approximate power law distributions of avalanche sizes and durations, samples in subcritical and supercritical phases, and scaling laws between anomalous exponents.
Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing
NASA Astrophysics Data System (ADS)
Shi, X.
2017-10-01
Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds for utilizing a cluster of Intel's many-integrated-core (MIC) processors or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be the better solution when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on a MSDOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The electrical system primary operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
MBus: An Ultra-Low Power Interconnect Bus for Next Generation Nanopower Systems
Pannuto, Pat; Lee, Yoonmyung; Kuo, Ye-Sheng; Foo, ZhiYoong; Kempke, Benjamin; Kim, Gyouho; Dreslinski, Ronald G.; Blaauw, David; Dutta, Prabal
2015-01-01
As we show in this paper, I/O has become the limiting factor in scaling down size and power toward the goal of invisible computing. Achieving this goal will require composing optimized and specialized—yet reusable—components with an interconnect that permits tiny, ultra-low power systems. In contrast to today’s interconnects which are limited by power-hungry pull-ups or high-overhead chip-select lines, our approach provides a superset of common bus features but at lower power, with fixed area and pin count, using fully synthesizable logic, and with surprisingly low protocol overhead. We present MBus, a new 4-pin, 22.6 pJ/bit/chip chip-to-chip interconnect made of two “shoot-through” rings. MBus facilitates ultra-low power system operation by implementing automatic power-gating of each chip in the system, easing the integration of active, inactive, and activating circuits on a single die. In addition, we introduce a new bus primitive: power oblivious communication, which guarantees message reception regardless of the recipient’s power state when a message is sent. This disentangles power management from communication, greatly simplifying the creation of viable, modular, and heterogeneous systems that operate on the order of nanowatts. To evaluate the viability, power, performance, overhead, and scalability of our design, we build both hardware and software implementations of MBus and show its seamless operation across two FPGAs and twelve custom chips from three different semiconductor processes. A three-chip, 2.2 mm3 MBus system draws 8 nW of total system standby power and uses only 22.6 pJ/bit/chip for communication. This is the lowest power for any system bus with MBus’s feature set. PMID:26855555
MBus: An Ultra-Low Power Interconnect Bus for Next Generation Nanopower Systems.
Pannuto, Pat; Lee, Yoonmyung; Kuo, Ye-Sheng; Foo, ZhiYoong; Kempke, Benjamin; Kim, Gyouho; Dreslinski, Ronald G; Blaauw, David; Dutta, Prabal
2015-06-01
As we show in this paper, I/O has become the limiting factor in scaling down size and power toward the goal of invisible computing. Achieving this goal will require composing optimized and specialized-yet reusable-components with an interconnect that permits tiny, ultra-low power systems. In contrast to today's interconnects which are limited by power-hungry pull-ups or high-overhead chip-select lines, our approach provides a superset of common bus features but at lower power, with fixed area and pin count, using fully synthesizable logic, and with surprisingly low protocol overhead. We present MBus , a new 4-pin, 22.6 pJ/bit/chip chip-to-chip interconnect made of two "shoot-through" rings. MBus facilitates ultra-low power system operation by implementing automatic power-gating of each chip in the system, easing the integration of active, inactive, and activating circuits on a single die. In addition, we introduce a new bus primitive: power oblivious communication, which guarantees message reception regardless of the recipient's power state when a message is sent. This disentangles power management from communication, greatly simplifying the creation of viable, modular, and heterogeneous systems that operate on the order of nanowatts. To evaluate the viability, power, performance, overhead, and scalability of our design, we build both hardware and software implementations of MBus and show its seamless operation across two FPGAs and twelve custom chips from three different semiconductor processes. A three-chip, 2.2 mm 3 MBus system draws 8 nW of total system standby power and uses only 22.6 pJ/bit/chip for communication. This is the lowest power for any system bus with MBus's feature set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.
1997-12-01
This paper describes three different computer codes which have been written to model village power applications. The reasons which have driven the development of these codes include: the existence of only limited field data; diverse applications can be modeled; models allow cost and performance comparisons; and simulations generate insights into cost structures. The models discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER - the hybrid optimization model for electric renewables - which provides economic screening for sensitivity analyses; and VIPOR - the village power model - which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.
GPU accelerated FDTD solver and its application in MRI.
Chi, J; Liu, F; Jin, J; Mason, D G; Crozier, S
2010-01-01
The finite difference time domain (FDTD) method is a popular technique for computational electromagnetics (CEM). The large computational power often required, however, has been a limiting factor for its applications. In this paper, we will present a graphics processing unit (GPU)-based parallel FDTD solver and its successful application to the investigation of a novel B1 shimming scheme for high-field magnetic resonance imaging (MRI). The optimized shimming scheme exhibits considerably improved transmit B(1) profiles. The GPU implementation dramatically shortened the runtime of FDTD simulation of electromagnetic field compared with its CPU counterpart. The acceleration in runtime has made such investigation possible, and will pave the way for other studies of large-scale computational electromagnetic problems in modern MRI which were previously impractical.
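For readers unfamiliar with the method, a minimal one-dimensional FDTD update loop (normalized units, vacuum, hypothetical grid and source parameters) looks roughly like the sketch below; the paper's solver is a full 3-D, tissue-loaded, GPU-parallel version of this same leapfrog structure.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch in normalized units; illustrative only.
nx, nsteps = 200, 500
ez = np.zeros(nx)        # electric field samples
hy = np.zeros(nx - 1)    # magnetic field samples, staggered by half a cell

for n in range(nsteps):
    hy += 0.5 * (ez[1:] - ez[:-1])                    # Courant number 0.5 folded in
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
    ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source
```

Each grid point depends only on its nearest neighbours from the previous half-step, which is what makes the method map so well onto GPUs.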
Parametric study of microwave-powered high-altitude airplane platforms designed for linear flight
NASA Technical Reports Server (NTRS)
Morris, C. E. K., Jr.
1981-01-01
The performance of a class of remotely piloted, microwave powered, high altitude airplane platforms is studied. The first part of each cycle of the flight profile consists of climb while the vehicle is tracked and powered by a microwave beam; this is followed by gliding flight back to a minimum altitude above a microwave station and initiation of another cycle. Parametric variations were used to define the effects of changes in the characteristics of the airplane aerodynamics, the energy transmission systems, the propulsion system, and winds. Results show that wind effects limit the reduction of wing loading and the increase of lift coefficient, two effective ways to obtain longer range and endurance for each flight cycle. Calculated climb performance showed strong sensitivity to some power and propulsion parameters. A simplified method of computing gliding endurance was developed.
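The abstract does not give the simplified endurance method, but a textbook small-glide-angle estimate (stated here as an assumption, with hypothetical parameter names) takes the sink rate as airspeed divided by the lift-to-drag ratio:

```python
def glide_endurance(h_start_m, h_end_m, airspeed_ms, lift_to_drag):
    """Time aloft in a steady glide under the small-glide-angle approximation."""
    sink_rate = airspeed_ms / lift_to_drag            # m/s
    return (h_start_m - h_end_m) / sink_rate          # seconds

# Example: gliding from 20 km down to 10 km at 30 m/s with L/D = 25
# gives roughly 8300 s, i.e. about 2.3 hours of gliding per cycle.
t_glide = glide_endurance(20_000.0, 10_000.0, 30.0, 25.0)
```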
NASA Astrophysics Data System (ADS)
Simmons, Michelle
2016-05-01
Down-scaling has been the leading paradigm of the semiconductor industry since the invention of the first transistor in 1947. However miniaturization will soon reach the ultimate limit, set by the discreteness of matter, leading to intensified research in alternative approaches for creating logic devices. This talk will discuss the development of a radical new technology for creating atomic-scale devices which is opening a new frontier of research in electronics globally. We will introduce single atom transistors where we can measure both the charge and spin of individual dopants with unique capabilities in controlling the quantum world. To this end, we will discuss how we are now demonstrating atom by atom, the best way to build a quantum computer - a new type of computer that exploits the laws of physics at very small dimensions in order to provide an exponential speed up in computational processing power.
Prediction of dry ice mass for firefighting robot actuation
NASA Astrophysics Data System (ADS)
Ajala, M. T.; Khan, Md R.; Shafie, A. A.; Salami, MJE; Mohamad Nor, M. I.
2017-11-01
The limited performance of electrically actuated firefighting robots in high-temperature fire environments has led to research on alternative propulsion systems for the mobility of firefighting robots in such environments. Capitalizing on the limitations of these electric actuators, we suggested a gas-actuated propulsion system in our earlier study. The propulsion system is made up of a pneumatic motor as the actuator (for the robot) and carbon dioxide gas (self-generated from dry ice) as the power source. To satisfy the consumption requirement (9 cfm) of the motor for efficient actuation of the robot in the fire environment, the volume of carbon dioxide gas, as well as the corresponding mass of dry ice that will produce the required volume for powering and actuating the robot, must be determined. This article, therefore, presents a computational analysis to predict the volumetric requirement and the dry ice mass sufficient to power a carbon dioxide gas propelled autonomous firefighting robot in a high-temperature environment. The governing equation of the sublimation of dry ice to carbon dioxide is established. An operating time of 2105.53 s and operating pressures ranging from 137.9 kPa to 482.65 kPa were obtained from the consumption rate of the motor. Thus, 8.85 m3 is computed as the volume requirement of the CAFFR, while the corresponding dry ice mass for the CAFFR actuation ranges from 21.67 kg to 75.83 kg depending on the operating pressure.
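The quoted numbers are consistent with a simple ideal-gas estimate. The sketch below is an assumption on my part, not the paper's derivation: it treats the gas as ideal and near room temperature, yet it recovers roughly the 21.67-75.83 kg range quoted above.

```python
R = 8.314          # J/(mol K), universal gas constant
M_CO2 = 0.04401    # kg/mol, molar mass of carbon dioxide

def dry_ice_mass(volume_m3, pressure_pa, temperature_k=298.15):
    """Ideal-gas estimate of the dry-ice mass needed to supply a given CO2 gas volume."""
    moles = pressure_pa * volume_m3 / (R * temperature_k)
    return moles * M_CO2

# 9 cfm sustained for 2105.53 s is about 8.85 m3 of gas; at 137.9-482.65 kPa this
# yields roughly 21.7-75.8 kg of dry ice, matching the range quoted in the abstract.
low  = dry_ice_mass(8.85, 137.9e3)
high = dry_ice_mass(8.85, 482.65e3)
```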
A Framework for Model-Based Diagnostics and Prognostics of Switched-Mode Power Supplies
2014-10-02
system. Some highlights of the work are included but not limited to the following aspects: first, the methodology is based on electronic ... electronic health management, with the goal of expanding the realm of electronic diagnostics and prognostics. 1. INTRODUCTION Electronic systems such...as electronic controls, onboard computers, communications, navigation and radar perform many critical functions in onboard military and commercial
Microprocessor-Based Systems Control for the Rigidized Inflatable Get-Away-Special Experiment
2004-03-01
communications and faster data throughput increase, satellites are becoming larger. Larger satellite antennas help to provide the needed gain to...increase communications in space. Compounding the performance and size trade-offs are the payload weight and size limit imposed by the launch vehicles...increased communications capacity, and reduce launch costs. This thesis develops and implements the computer control system and power system to
Boolean and brain-inspired computing using spin-transfer torque devices
NASA Astrophysics Data System (ADS)
Fan, Deliang
Several completely new approaches (such as spintronics, carbon nanotubes, graphene, TFETs, etc.) to information processing and data storage technologies are emerging to address the time frame beyond the current Complementary Metal-Oxide-Semiconductor (CMOS) roadmap. The high-speed magnetization switching of a nano-magnet due to current-induced spin-transfer torque (STT) has been demonstrated in recent experiments. Such STT devices can be explored in compact, low power memory and logic design. In order to truly leverage STT device based computing, researchers require a re-think of circuit, architecture, and computing model, since the STT devices are unlikely to be drop-in replacements for CMOS. The potential of STT device based computing will be best realized by considering new computing models that are inherently suited to the characteristics of STT devices, and new applications that are enabled by their unique capabilities, thereby attaining performance that CMOS cannot achieve. The goal of this research is to conduct synergistic exploration at the architecture, circuit and device levels for Boolean and brain-inspired computing using nanoscale STT devices. Specifically, we first show that non-volatile STT devices can be used in designing configurable Boolean logic blocks. We propose a spin-memristor threshold logic (SMTL) gate design, where a memristive cross-bar array is used to perform current-mode summation of binary inputs and the low power current-mode spintronic threshold device carries out the energy efficient threshold operation. Next, for brain-inspired computing, we have exploited different spin-transfer torque device structures that can implement the hard-limiting and soft-limiting artificial neuron transfer functions, respectively. We apply such STT based neurons (or 'spin-neurons') in various neural network architectures, such as hierarchical temporal memory and feed-forward neural networks, for performing "human-like" cognitive computing, which show more than two orders of magnitude lower energy consumption compared to state-of-the-art CMOS implementations. Finally, we show that the dynamics of an injection-locked Spin Hall Effect Spin-Torque Oscillator (SHE-STO) cluster can be exploited as a robust multi-dimensional distance metric for associative computing, image/video analysis, etc. Our simulation results show that the proposed system architecture with injection-locked SHE-STOs and the associated CMOS interface circuits can be suitable for robust and energy efficient associative computing and pattern matching.
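As a purely behavioural illustration of the two transfer functions mentioned above (nothing here models the device physics; the weights and inputs are made-up placeholders), a threshold neuron of the kind used in the SMTL gate can be sketched as:

```python
import numpy as np

def hard_limiting(x):
    """Step-like neuron output: fires only if the summed input crosses the threshold."""
    return np.where(x >= 0.0, 1.0, 0.0)

def soft_limiting(x, beta=1.0):
    """Sigmoid-like neuron output with slope parameter beta."""
    return 1.0 / (1.0 + np.exp(-beta * x))

# current-mode summation of binary inputs followed by a threshold operation
weights = np.array([0.5, -0.3, 0.8])
inputs  = np.array([1.0, 0.0, 1.0])
y_hard = hard_limiting(weights @ inputs)   # 1.0, since the weighted sum 1.3 >= 0
y_soft = soft_limiting(weights @ inputs)   # ~0.79
```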
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
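The abstract is truncated, but to make the idea concrete, here is one common closed-form power calculation for a balanced two-level cluster-randomized design with treatment assigned at the cluster level. This is a standard textbook formula stated as an assumption, not necessarily one of the designs tabulated in the paper.

```python
import numpy as np
from scipy import stats

def power_cluster_randomized(delta, n_clusters, cluster_size, rho, alpha=0.05):
    """Approximate two-sided power: delta is the standardized effect size,
    n_clusters the total number of clusters (split equally between arms),
    cluster_size the individuals per cluster, rho the intraclass correlation."""
    var_effect = 4.0 * (rho + (1.0 - rho) / cluster_size) / n_clusters
    noncentrality = delta / np.sqrt(var_effect)
    df = n_clusters - 2
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
    return (1.0 - stats.nct.cdf(t_crit, df, noncentrality)
            + stats.nct.cdf(-t_crit, df, noncentrality))

# e.g. 40 clusters of 20 people, effect size 0.25, rho = 0.10 -> power around 0.5
p = power_cluster_randomized(0.25, 40, 20, 0.10)
```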
Symbolic programming language in molecular multicenter integral problem
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Bouferguene, Ahmed
It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which the computation of three-center nuclear attraction and Coulomb integrals is the most frequently encountered. As the molecular system becomes larger, computation of these integrals becomes one of the most laborious and time-consuming steps in molecular systems calculation. Improvement of the computational methods of molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations for improving convergence of highly oscillatory integrals. These methods form the basis of new methods for solving various problems that were unsolvable otherwise and have many applications as well. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in their turn should satisfy some limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied. Differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.
Ubiquitous Presenter: A Tablet PC-based System to Support Instructors and Students
NASA Astrophysics Data System (ADS)
Price, Edward; Simon, Beth
2009-12-01
Digital lecturing systems (computer and projector, often with PowerPoint) offer physics instructors the ability to incorporate graphics and the power to share and reuse materials. But these systems do a poor job of supporting interaction in the classroom. For instance, with digital presentation systems, instructors have limited ability to spontaneously respond to student questions. This limitation is especially acute during classroom activities such as problem solving, Peer Instruction, and Interactive Lecture Demonstrations (ILDs).2 A Tablet PC, a laptop computer with a stylus that can be used to "write" on the screen, provides a way for instructors to add digital ink spontaneously to a presentation in progress. The Tablet PC can be a powerful tool for teaching,3,4 especially when combined with software systems specifically designed to leverage digital ink for pedagogical uses. Ubiquitous Presenter (UP) is one such freely available system.5 Developed at the University of California, San Diego, and based on Classroom Presenter,6 UP allows the instructor to ink prepared digital material (such as exported PowerPoint slides) in real time in class. Ink is automatically archived stroke by stroke and can be reviewed through a web browser (by both students and instructors). The system also supports spontaneous in-class interaction through a web interface—students with web-enabled devices (Tablet PCs, regular laptops, PDAs, and cell phones) can make text-, ink-, or image-based submissions on the instructor's slides. The instructor can review and then project submitted slides to the class and add additional ink, so that material generated by students can be a focus for discussion. A brief video showing UP in action is at http://physics.csusm.edu/UP. In this article, we describe UP and give examples of how UP can support the physics classroom.
Intelligent self-organization methods for wireless ad hoc sensor networks based on limited resources
NASA Astrophysics Data System (ADS)
Hortos, William S.
2006-05-01
A wireless ad hoc sensor network (WSN) is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support and sensor nodes communicate with each other only when they are in transmission range. To a greater degree than the terminals found in mobile ad hoc networks (MANETs) for communications, sensor nodes are resource-constrained, with limited computational processing, bandwidth, memory, and power, and are typically unattended once in operation. Consequently, the level of information exchange among nodes, to support any complex adaptive algorithms to establish network connectivity and optimize throughput, not only depletes those limited resources and creates high overhead in narrowband communications, but also increases network vulnerability to eavesdropping by malicious nodes. Cooperation among nodes, critical to the mission of sensor networks, can thus be disrupted by the inappropriate choice of the method for self-organization. Recently published contributions to the self-configuration of ad hoc sensor networks, e.g., self-organizing mapping and swarm intelligence techniques, have been based on the adaptive control of the cross-layer interactions found in MANET protocols to achieve one or more performance objectives: connectivity, intrusion resistance, power control, throughput, and delay. However, few studies have examined the performance of these algorithms when implemented with the limited resources of WSNs. In this paper, self-organization algorithms for the initiation, operation and maintenance of a network topology from a collection of wireless sensor nodes are proposed that improve the performance metrics significant to WSNs. The intelligent algorithm approach emphasizes low computational complexity, energy efficiency and robust adaptation to change, allowing distributed implementation with the actual limited resources of the cooperative nodes of the network. Extensions of the algorithms from flat topologies to two-tier hierarchies of sensor nodes are presented. Results from a few simulations of the proposed algorithms are compared to the published results of other approaches to sensor network self-organization in common scenarios. The estimated network lifetime and extent under static resource allocations are computed.
The limits of predictability: Indeterminism and undecidability in classical and quantum physics
NASA Astrophysics Data System (ADS)
Korolev, Alexandre V.
This thesis is a collection of three case studies, investigating various sources of indeterminism and undecidability as they bear upon in principle unpredictability of the behaviour of mechanistic systems in both classical and quantum physics. I begin by examining the sources of indeterminism and acausality in classical physics. Here I discuss the physical significance of an often overlooked and yet important Lipschitz condition, the violation of which underlies the existence of anomalous non-trivial solutions in the Norton-type indeterministic systems. I argue that the singularity arising from the violation of the Lipschitz condition in the systems considered appears to be so fragile as to be easily destroyed by slightly relaxing certain (infinite) idealizations required by these models. In particular, I show that the idealization of an absolutely nondeformable, or infinitely rigid, dome appears to be an essential assumption for anomalous motion to begin; any slightest elastic deformations of the dome due to finite rigidity of the dome destroy the shape of the dome required for indeterminism to obtain. I also consider several modifications of the original Norton's example and show that indeterminism in these cases, too, critically depends on the nature of certain idealizations pertaining to elastic properties of the bodies in these models. As a result, I argue that indeterminism of the Norton-type Lipschitz-indeterministic systems should rather be viewed as an artefact of certain (infinite) idealizations essential for the models, depriving the examples of much of their intended metaphysical import, as, for example, in Norton's antifundamentalist programme. Second, I examine the predictive computational limitations of a classical Laplace's demon. I demonstrate that in situations of self-fulfilling prognoses the class of undecidable propositions about certain future events, in general, is not empty; any Laplace's demon having all the information about the world now will be unable to predict all the future. In order to answer certain questions about the future it needs to resort occasionally to, or to consult with, a demon of a higher order in the computational hierarchy whose computational powers are beyond that of any Turing machine. In computer science such power is attributed to a theoretical device called an Oracle---a device capable of looking through an infinite domain in a finite time. I also discuss the distinction between ontological and epistemological views of determinism, and how adopting Wheeler-Landauer view of physical laws can entangle these aspects on a more fundamental level. Thirdly, I examine a recent proposal from the area of quantum computation purporting to utilize peculiarities of quantum reality to perform hypercomputation. While the current view is that quantum algorithms (such as Shor's) lead to re-description of the complexity space for computational problems, recently it has been argued (by Kieu) that certain novel quantum adiabatic algorithms may even require reconsideration of the whole notion of computability, by being able to break the Turing limit and "compute the non-computable". If implemented, such algorithms could serve as a physical realization of an Oracle needed for a Laplacian demon to accomplish its job. I critically review this latter proposal by exposing the weaknesses of Kieu's quantum adiabatic demon, pointing out its failure to deliver the purported hypercomputation. 
Regardless of whether the class of hypercomputers is non-empty, Kieu's proposed algorithm is not a member of this distinguished club, and a quantum computer powered Laplace's demon can do no more than its ordinary classical counterpart.
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences including computational physics is vital as it provides computing power to crunch big numbers at a faster rate. In compute intensive applications that requires high resolution such as Monte Carlo simulation, the use of computer clusters in a grid form that supplies computational power to any nodes within the grid that needs computing power, has now become a necessity. In this paper, we described how the clusters running on a specific application could use resources within the grid, to run the applications to speed up the computing process.
Fitting power-laws in empirical data with estimators that work for all exponents
Hanel, Rudolf; Corominas-Murtra, Bernat; Liu, Bo; Thurner, Stefan
2017-01-01
Most standard methods based on maximum likelihood (ML) estimates of power-law exponents can only be reliably used to identify exponents smaller than minus one. The argument that power laws are otherwise not normalizable, depends on the underlying sample space the data is drawn from, and is true only for sample spaces that are unbounded from above. Power-laws obtained from bounded sample spaces (as is the case for practically all data related problems) are always free of such limitations and maximum likelihood estimates can be obtained for arbitrary powers without restrictions. Here we first derive the appropriate ML estimator for arbitrary exponents of power-law distributions on bounded discrete sample spaces. We then show that an almost identical estimator also works perfectly for continuous data. We implemented this ML estimator and discuss its performance with previous attempts. We present a general recipe of how to use these estimators and present the associated computer codes. PMID:28245249
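A direct numerical version of the bounded-support estimator described above (my own sketch, not the authors' published code) simply maximizes the discrete power-law likelihood over an unrestricted exponent:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_bounded_powerlaw(data, x_min, x_max):
    """ML estimate of lam for p(x) ~ x**(-lam) on the bounded support {x_min, ..., x_max}.
    Because the support is bounded, the normalization is finite for any lam, so the
    fit is not restricted to lam > 1 (exponents smaller than minus one)."""
    support = np.arange(x_min, x_max + 1, dtype=float)
    data = np.asarray(data, dtype=float)
    sum_log, n = np.log(data).sum(), data.size

    def neg_loglik(lam):
        z = np.sum(support ** (-lam))       # finite for any lam on a bounded support
        return lam * sum_log + n * np.log(z)

    return minimize_scalar(neg_loglik, bounds=(-10.0, 10.0), method="bounded").x

# usage sketch: lam_hat = fit_bounded_powerlaw(samples, 1, 1000)
```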
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
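A toy Poisson version of the recipe described above (hypothetical function and parameter names, assuming a single counting cell with known mean background) makes the two error rates explicit:

```python
from scipy import stats

def detection_threshold(background, alpha=0.05):
    """Smallest count n* with P(N >= n* | background) <= alpha (controls Type I error)."""
    n = 0
    while stats.poisson.sf(n - 1, background) > alpha:   # sf(n-1) = P(N >= n)
        n += 1
    return n

def upper_limit(background, alpha=0.05, beta=0.5, step=0.01):
    """Smallest source intensity detected with probability >= 1 - beta at that threshold."""
    n_star = detection_threshold(background, alpha)
    s = 0.0
    while stats.poisson.sf(n_star - 1, background + s) < 1.0 - beta:
        s += step            # coarse scan; a root finder would be used in practice
    return s

# e.g. upper_limit(3.0) is the intensity a source must exceed to be detected
# at least half the time over a mean background of 3 counts.
```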
Central Limit Theorem: New SOCR Applet and Demonstration Activity
Dinov, Ivo D.; Christou, Nicolas; Sanchez, Juana
2011-01-01
Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multifaceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals; to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and ties these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend (Applet: http://www.socr.ucla.edu/htmls/SOCR_Experiments.html and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem). PMID:21833159
Central Limit Theorem: New SOCR Applet and Demonstration Activity.
Dinov, Ivo D; Christou, Nicolas; Sanchez, Juana
2008-07-01
Modern approaches for information technology based blended education utilize a variety of novel instructional, computational and network resources. Such attempts employ technology to deliver integrated, dynamically linked, interactive content and multifaceted learning environments, which may facilitate student comprehension and information retention. In this manuscript, we describe one such innovative effort of using technological tools for improving student motivation and learning of the theory, practice and usability of the Central Limit Theorem (CLT) in probability and statistics courses. Our approach is based on harnessing the computational libraries developed by the Statistics Online Computational Resource (SOCR) to design a new interactive Java applet and a corresponding demonstration activity that illustrate the meaning and the power of the CLT. The CLT applet and activity have clear common goals; to provide graphical representation of the CLT, to improve student intuition, and to empirically validate and establish the limits of the CLT. The SOCR CLT activity consists of four experiments that demonstrate the assumptions, meaning and implications of the CLT and ties these to specific hands-on simulations. We include a number of examples illustrating the theory and applications of the CLT. Both the SOCR CLT applet and activity are freely available online to the community to test, validate and extend (Applet: http://www.socr.ucla.edu/htmls/SOCR_Experiments.html and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem).
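A minimal command-line analogue of the applet's experiments (my sketch, not SOCR code) draws sample means from a strongly skewed population and checks that the standardized means approach a standard normal as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential(1) population: mean 1, standard deviation 1, strongly skewed.
for n in (2, 10, 50):
    means = rng.exponential(scale=1.0, size=(100_000, n)).mean(axis=1)
    z = (means - 1.0) * np.sqrt(n)          # standardize with the population parameters
    coverage = (np.abs(z) < 1.96).mean()    # approaches 0.95 as the CLT takes hold
    print(n, round(z.mean(), 3), round(z.std(), 3), round(coverage, 3))
```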
Higher-order ice-sheet modelling accelerated by multigrid on graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian; Egholm, David
2013-04-01
Higher-order ice flow modelling is a very computer-intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied for simulating long-term glacial landscape evolution, the ice-sheet models must consider very long time series, while both high temporal and spatial resolution is needed to resolve small effects. Higher-order and full Stokes models have therefore seen very limited use in this field. However, recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
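To illustrate the smoother named above, here is a scalar red-black Gauss-Seidel sweep for a linear model problem (-laplace(u) = f on a square grid). This is only the linear analogue of the non-linear FAS smoother used for the iSOSIA equations; a GPU version would update all cells of one colour in parallel rather than looping.

```python
import numpy as np

def red_black_sweep(u, f, h, sweeps=1):
    """Red-black Gauss-Seidel sweeps for -laplace(u) = f with grid spacing h."""
    for _ in range(sweeps):
        for parity in (0, 1):                       # 0 = "red" cells, 1 = "black" cells
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == parity:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                          u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
    return u
```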
Turbulence modeling of free shear layers for high performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas
1993-01-01
In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.
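For reference, the standard high-Reynolds-number form of the k-epsilon model (with the usual Launder-Spalding constants; the paper may use a variant tuned for free shear layers) is

```latex
\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
\frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \qquad
\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k},
```

with C_mu = 0.09, C_eps1 = 1.44, C_eps2 = 1.92, sigma_k = 1.0, sigma_eps = 1.3, and P_k the production of turbulent kinetic energy.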
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Benny N.
2000-01-01
There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years and this trend may continue for the near future. However, it is a well known fact that there are major obstacles, i.e., physical limitations on feature size reduction and the ever-increasing cost of foundries, that would prevent the long term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10(exp 11) - 10(exp 12) per square cm), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as FFT, Wavelet, and Walsh-Hadamard Transforms.
Advanced Computational Methods for Security Constrained Financial Transmission Rights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria
Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable and in some cases is shown to outperform the widely used CPLEX algorithms. The proposed formulation and NDS based solver is also easily parallelizable, enabling further computational improvement.
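To make the underlying optimization concrete, a toy single-category FTR auction can be written as a linear program: maximize the bid value of awarded FTRs subject to simultaneous-feasibility (line-flow) constraints. The matrices below are made-up placeholders, and this linear toy says nothing about the paper's reformulation or its NDS solver.

```python
import numpy as np
from scipy.optimize import linprog

bids   = np.array([12.0, 8.0, 15.0])        # $/MW offered for each FTR request
maxmw  = np.array([100.0, 50.0, 80.0])      # requested MW for each FTR
ptdf   = np.array([[0.3, -0.2, 0.5],        # line-flow sensitivity to each FTR award
                   [0.1,  0.4, -0.3]])
limits = np.array([60.0, 40.0])             # thermal limits on each line (MW)

# Maximize bids @ x  subject to  -limits <= ptdf @ x <= limits  and  0 <= x <= maxmw.
res = linprog(c=-bids,
              A_ub=np.vstack([ptdf, -ptdf]),
              b_ub=np.concatenate([limits, limits]),
              bounds=list(zip(np.zeros(3), maxmw)))
awarded = res.x        # FTR MW awards maximizing social welfare in this toy model
```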
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow to save on computation time or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
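One way to read the abstract (my interpretation, with hypothetical names) is as a per-Fourier-mode AR(1) recursion in which a tunable fraction of each mode is advected as frozen flow and the remainder is refreshed with Kolmogorov-weighted noise:

```python
import numpy as np

def evolve_phase(phase_ft, psd, alpha, wind_phasor, rng):
    """One AR(1) step per Fourier mode of an atmospheric phase screen.

    alpha near 1 keeps most power in frozen flow (advection via wind_phasor);
    smaller alpha shifts power into the stochastic 'boiling' component, while the
    sqrt(1 - alpha**2) factor keeps the per-mode variance pinned to the target PSD."""
    noise = (rng.standard_normal(phase_ft.shape)
             + 1j * rng.standard_normal(phase_ft.shape)) * np.sqrt(psd / 2.0)
    return alpha * wind_phasor * phase_ft + np.sqrt(1.0 - alpha**2) * noise

# usage sketch (placeholder names, not from the paper):
# phase_ft = evolve_phase(phase_ft, kolmogorov_psd, 0.999, np.exp(1j * k_dot_v * dt), rng)
```

Because only the previous step is stored, the memory footprint stays fixed regardless of the exposure length, which is the practical advantage highlighted above.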
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
Scaling Trapped Ion Quantum Computers Using Fast Gates and Microtraps
NASA Astrophysics Data System (ADS)
Ratcliffe, Alexander K.; Taylor, Richard L.; Hope, Joseph J.; Carvalho, André R. R.
2018-06-01
Most attempts to produce a scalable quantum information processing platform based on ion traps have focused on the shuttling of ions in segmented traps. We show that an architecture based on an array of microtraps with fast gates will outperform architectures based on ion shuttling. This system requires higher power lasers but does not require the manipulation of potentials or shuttling of ions. This improves optical access, reduces the complexity of the trap, and reduces the number of conductive surfaces close to the ions. The use of fast gates also removes limitations on the gate time. Error rates of 10^-5 are shown to be possible with 250 mW laser power and a trap separation of 100 μm. The performance of the gates is shown to be robust to the limitations in the laser repetition rate and the presence of many ions in the trap array.
Optimal design study of high efficiency indium phosphide space solar cells
NASA Technical Reports Server (NTRS)
Jain, Raj K.; Flood, Dennis J.
1990-01-01
Recently indium phosphide solar cells have achieved beginning-of-life AM0 efficiencies in excess of 19 pct. at 25 C. The high efficiency prospects, along with superb radiation tolerance, make indium phosphide a leading material for space power requirements. To achieve cost effectiveness, practical cell efficiencies have to be raised to near theoretical limits and thin film indium phosphide cells need to be developed. An optimal design study of high efficiency indium phosphide solar cells for space power applications, using the PC-1D computer program, is described. It is shown that cells with efficiencies over 22 pct. AM0 at 25 C could be fabricated by achieving proper material and process parameters. It is observed that further improvements in cell material and process parameters could lead to experimental cell efficiencies near theoretical limits. The effect of various emitter and base parameters on cell performance was studied.
Space Vehicle Powerdown Philosophies Derived from the Space Shuttle Program
NASA Technical Reports Server (NTRS)
Willsey, Mark; Bailey, Brad
2011-01-01
In spaceflight, electrical power is a vital but limited resource. Almost every spacecraft system, from avionics to life support systems, relies on electrical power. Since power can be limited by the generation system s performance, available consumables, solar array shading, or heat rejection capability, vehicle power management is a critical consideration in spacecraft design, mission planning, and real-time operations. The purpose of this paper is to capture the powerdown philosophies used during the Space Shuttle Program. This paper will discuss how electrical equipment is managed real-time to adjust the overall vehicle power level to ensure that systems and consumables will support changing mission objectives, as well as how electrical equipment is managed following system anomalies. We will focus on the power related impacts of anomalies in the generation systems, air and liquid cooling systems, and significant environmental events such as a fire, decrease in cabin pressure, or micrometeoroid debris strike. Additionally, considerations for executing powerdowns by crew action or by ground commands from Mission Control will be presented. General lessons learned from nearly 30 years of Space Shuttle powerdowns will be discussed, including an in depth case-study of STS-117. During this International Space Station (ISS) assembly mission, a failure of computers controlling the ISS guidance, navigation, and control system required that the Space Shuttle s maneuvering system be used to maintain attitude control. A powerdown was performed to save power generation consumables, thus extending the docked mission duration and allowing more time to resolve the issue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...
2017-07-11
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
A method to approximate a closest loadability limit using multiple load flow solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong
A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle node bifurcation point, using a pair of multiple load flow solutions. More strictly, the obtainable points by the method are the stationary points including not only CLL but also farthest and saddle points. An operating solution and a low voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. They can be used in monitoring loadability margin, in identification of weak spots in a power system and in the examination of an optimal control against voltage collapse. Most of the computation time of the proposed method is taken in calculating the load flow solution pair. The remaining computation time is less than that of an ordinary load flow.
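A minimal numerical sketch of this idea, assuming a generic load flow model with a callable Jacobian; the midpoint and difference estimates, and all names, are illustrative rather than the authors' exact algorithm:

```python
import numpy as np

def approximate_cll(x_op, x_low, jacobian):
    """Rough estimate of a saddle-node (loadability limit) point from a pair of
    load flow solutions: an operating solution x_op and a low-voltage solution
    x_low. `jacobian(x)` must return the load flow Jacobian at state x."""
    x_sn = 0.5 * (x_op + x_low)             # candidate saddle-node state
    v_right = x_op - x_low
    v_right /= np.linalg.norm(v_right)      # approximate right null eigenvector
    J = jacobian(x_sn)
    # left null eigenvector from the smallest singular triplet of J;
    # it approximates the normal to the loadability surface
    U, s, Vt = np.linalg.svd(J)
    w_left = U[:, -1]
    return x_sn, v_right, w_left
```

The left vector could then be used to rank weak buses or to form a first-order loadability margin sensitivity, in the spirit described above.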
NASA Technical Reports Server (NTRS)
Manista, E. J.
1972-01-01
The effect of collector, guard-ring potential imbalance on the observed collector-current-density J, collector-to-emitter voltage V characteristic was evaluated in a planar, fixed-space, guard-ringed thermionic converter. The J,V characteristic was swept in a period of 15 msec by a variable load. A computerized data acquisition system recorded test parameters. The results indicate minimal distortion of the J,V curve in the power output quadrant for the nominal guard-ring circuit configuration. Considerable distortion, along with a lowering of the ignited-mode striking voltage, was observed for the configuration with the emitter shorted to the guard ring. A limited-range performance map of an etched-rhenium, niobium, planar converter was obtained by using an improved computer program for the data acquisition system.
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
The matter power spectrum in redshift space using effective field theory
NASA Astrophysics Data System (ADS)
Fonseca de la Bella, Lucía; Regan, Donough; Seery, David; Hotchkiss, Shaun
2017-11-01
The use of Eulerian 'standard perturbation theory' to describe mass assembly in the early universe has traditionally been limited to modes with k lesssim 0.1 h/Mpc at z=0. At larger k the SPT power spectrum deviates from measurements made using N-body simulations. Recently, there has been progress in extending the reach of perturbation theory to larger k using ideas borrowed from effective field theory. We revisit the computation of the redshift-space matter power spectrum within this framework, including for the first time the full one-loop time dependence. We use a resummation scheme proposed by Vlah et al. to account for damping of baryonic acoustic oscillations due to large-scale random motions and show that this has a significant effect on the multipole power spectra. We renormalize by comparison to a suite of custom N-body simulations matching the MultiDark MDR1 cosmology. At z=0 and for scales k lesssim 0.4 h/Mpc we find that the EFT furnishes a description of the real-space power spectrum up to ~ 2%, for the l = 0 mode up to ~ 5%, and for the l = 2, 4 modes up to ~ 25%. We argue that, in the MDR1 cosmology, positivity of the l=0 mode gives a firm upper limit of k ≈ 0.74 h/Mpc for the validity of the one-loop EFT prediction in redshift space using only the lowest-order counterterm. We show that replacing the one-loop growth factors by their Einstein-de Sitter counterparts is a good approximation for the l=0 mode, but can induce deviations as large as 2% for the l=2, 4 modes. An accompanying software bundle, distributed under open source licenses, includes Mathematica notebooks describing the calculation, together with parallel pipelines capable of computing both the necessary one-loop SPT integrals and the effective field theory counterterms.
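For orientation, the lowest-order counterterm referred to above enters the real-space prediction schematically as (a standard form; the paper's normalization and time dependence of the coefficient may differ):

$$ P^{\mathrm{EFT}}(k,z) \;\simeq\; P^{\mathrm{SPT},\,1\text{-}\mathrm{loop}}(k,z) \;-\; 2\,c_s^2(z)\,k^2\,P^{\mathrm{lin}}(k,z), $$

where the effective "sound speed" $c_s^2(z)$ is fixed (renormalized) by fitting to N-body measurements.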
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
Study of novel concepts of power transmission gears
NASA Technical Reports Server (NTRS)
Rivin, Eugene I.
1991-01-01
Two concepts in power transmission gear design are proposed which provide a potential for large noise reduction and for improving weight to payload ratio due to use of advanced fiber reinforced and ceramic materials. These concepts are briefly discussed. Since both concepts use ultrathin layered rubber-metal laminates for accommodating limited travel displacements, properties of the laminates, such as their compressive strength, compressive and shear moduli were studied. Extensive testing and computational analysis were performed on the first concept gears (laminate coated conformal gears). Design and testing of the second conceptual design (composite gear with separation of sliding and rolling motions) are specifically described.
Capacity of a direct detection optical communication channel
NASA Technical Reports Server (NTRS)
Tan, H. H.
1980-01-01
The capacity of a free space optical channel using a direct detection receiver is derived under both peak and average signal power constraints and without a signal bandwidth constraint. The addition of instantaneous noiseless feedback from the receiver to the transmitter does not increase the channel capacity. In the absence of received background noise, an optimally coded PPM system is shown to achieve capacity in the limit as signal bandwidth approaches infinity. In the case of large peak to average signal power ratios, an interleaved coding scheme with PPM modulation is shown to have a computational cutoff rate far greater than ordinary coding schemes.
AEROSOL PARTICLE COLLECTOR DESIGN STUDY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Richard Dimenna, R
2007-09-27
A computational evaluation of a particle collector design was performed to evaluate the behavior of aerosol particles in a fast flowing gas stream. The objective of the work was to improve the collection efficiency of the device while maintaining a minimum specified air throughput, nominal collector size, and minimal power requirements. The impact of a range of parameters was considered subject to constraints on gas flow rate, overall collector dimensions, and power limitations. Potential improvements were identified, some of which have already been implemented. Other more complex changes were identified and are described here for further consideration. In addition, fruitful areas for further study are proposed.
Comparison among mathematical models of the photovoltaic cell for computer simulation purposes
NASA Astrophysics Data System (ADS)
Tofoli, Fernando Lessa; Pereira, Denis de Castro; Josias De Paula, Wesley; Moreira Vicente, Eduardo; Vicente, Paula dos Santos; Braga, Henrique Antonio Carvalho
2017-07-01
This paper presents a comparison among mathematical models used in the simulation of solar photovoltaic modules that can be easily integrated with power electronic converters. In order to perform the analysis, three models available in the literature and also the physical model of the module in the software PSIM® are used. Some results regarding the respective I × V and P × V curves are presented, and their advantages and possible limitations are discussed. In addition, a DC-DC buck converter performs maximum power point tracking using the perturb and observe method, while the performance of each of the aforementioned models is investigated.
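The perturb and observe loop mentioned here can be sketched in a few lines; the step size, names, and update rule below are a generic textbook version under assumed voltage-reference control, not the authors' implementation:

```python
def perturb_and_observe(v_meas, i_meas, v_ref, state, dv=0.5):
    """One P&O iteration: perturb the PV voltage reference and keep the
    direction that increased the measured power. `state` holds the previous
    power and voltage samples."""
    p = v_meas * i_meas
    going_up = v_meas > state["v_prev"]
    if p > state["p_prev"]:
        v_ref += dv if going_up else -dv    # power rose: keep perturbing the same way
    else:
        v_ref += -dv if going_up else dv    # power fell: reverse the perturbation
    state["p_prev"], state["v_prev"] = p, v_meas
    return v_ref
```

The updated reference would then set the duty cycle of the DC-DC buck converter.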
An improved model for whole genome phylogenetic analysis by Fourier transform.
Yin, Changchuan; Yau, Stephen S-T
2015-10-07
DNA sequence similarity comparison is one of the major steps in computational phylogenetic studies. The sequence comparison of closely related DNA sequences and genomes is usually performed by multiple sequence alignments (MSA). While the MSA method is accurate for some types of sequences, it may produce incorrect results when DNA sequences have undergone rearrangements, as in many bacterial and viral genomes. It is also limited by its computational complexity for comparing large volumes of data. Previously, we proposed an alignment-free method that exploits the full information contents of DNA sequences by Discrete Fourier Transform (DFT), but still with some limitations. Here, we present a significantly improved method for the similarity comparison of DNA sequences by DFT. In this method, we map DNA sequences into 2-dimensional (2D) numerical sequences and then apply DFT to transform the 2D numerical sequences into the frequency domain. In the 2D mapping, the nucleotide composition of a DNA sequence is a determinant factor, and the 2D mapping reduces the nucleotide composition bias in the distance measure, thus improving the similarity measure of DNA sequences. To compare the DFT power spectra of DNA sequences with different lengths, we propose an improved even scaling algorithm to extend shorter DFT power spectra to the longest length of the underlying sequences. After the DFT power spectra are evenly scaled, they have the same dimensionality in the Fourier frequency space, and the Euclidean distances between the full Fourier power spectra of the DNA sequences are used as the dissimilarity metric. The improved DFT method, with computational performance increased by the 2D numerical representation, is applicable to DNA sequences over a wide range of lengths. We assess the accuracy of the improved DFT similarity measure in hierarchical clustering of different DNA sequences including simulated and real datasets. The method yields accurate and reliable phylogenetic trees and demonstrates that the improved DFT dissimilarity measure is an efficient and effective similarity measure of DNA sequences. Due to its high efficiency and accuracy, the proposed DFT similarity measure is successfully applied on phylogenetic analysis for individual genes and large whole bacterial genomes. Copyright © 2015 Elsevier Ltd. All rights reserved.
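A compact sketch of the pipeline described above; the particular 2D nucleotide mapping and the nearest-index version of even scaling are illustrative stand-ins for the paper's definitions:

```python
import numpy as np

BASE_2D = {"A": 1 + 1j, "T": -1 + 1j, "G": 1 - 1j, "C": -1 - 1j}  # assumed mapping

def dft_power_spectrum(seq):
    """DFT power spectrum of a DNA sequence mapped to a complex 2D signal."""
    x = np.array([BASE_2D[b] for b in seq.upper()])
    return np.abs(np.fft.fft(x)) ** 2

def even_scale(ps, target_len):
    """Stretch a power spectrum to target_len (simple nearest-index variant)."""
    idx = np.arange(target_len) * len(ps) // target_len
    return ps[idx]

def dft_distance(seq_a, seq_b):
    """Euclidean distance between evenly scaled full DFT power spectra."""
    pa, pb = dft_power_spectrum(seq_a), dft_power_spectrum(seq_b)
    n = max(len(pa), len(pb))
    return float(np.linalg.norm(even_scale(pa, n) - even_scale(pb, n)))
```

The pairwise distance matrix produced this way can be fed directly to a hierarchical clustering routine to build the phylogenetic trees mentioned above.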
High-power graphic computers for visual simulation: a real-time--rendering revolution
NASA Technical Reports Server (NTRS)
Kaiser, M. K.
1996-01-01
Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.
Sunohara, Tetsu; Hirata, Akimasa; Laakso, Ilkka; Onishi, Teruo
2014-07-21
This study investigates the specific absorption rate (SAR) and the in situ electric field in anatomically based human models for the magnetic field from an inductive wireless power transfer system developed on the basis of the specifications of the Wireless Power Consortium. The transfer system consists of two induction coils covered by magnetic sheets. Both the waiting and charging conditions are considered. The transfer frequency considered in this study is 140 kHz, which is within the range where the magneto-quasi-static approximation is valid. The SAR and in situ electric field in the chest and arm of the models are calculated by numerically solving the scalar potential finite difference equation. The electromagnetic modelling of the coils in the wireless power transfer system is verified by comparing the computed and measured magnetic field distributions. The results indicate that the peak value of the SAR averaged over 10 g of tissue and that of the in situ electric field are 72 nW kg⁻¹ and 91 mV m⁻¹ for a transmitted power of 1 W. Consequently, the maximum allowable transmitted powers satisfying the exposure limits of the SAR (2 W kg⁻¹) and the in situ electric field (18.9 V m⁻¹) are found to be 28 MW and 43 kW. The computational results show that the in situ electric field in the chest is the most restrictive factor when compliance of the wireless power transfer system is evaluated according to international guidelines.
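For reference, the scalar potential finite difference (SPFD) approach typically solves, under the magneto-quasi-static approximation, an equation of the form below for the scalar potential φ, with E = −jωA₀ − ∇φ; signs and conventions vary, so this is a schematic statement rather than the exact formulation used in the study:

$$ \nabla \cdot \left( \sigma \nabla \varphi \right) = -\,j\omega\, \nabla \cdot \left( \sigma \mathbf{A}_0 \right), $$

where σ is the tissue conductivity and A₀ is the vector potential of the applied coil field.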
Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles
NASA Technical Reports Server (NTRS)
Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick
2012-01-01
Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.
Trellis-coded CPM for satellite-based mobile communications
NASA Technical Reports Server (NTRS)
Abrishamkar, Farrokh; Biglieri, Ezio
1988-01-01
Digital transmission for satellite-based land mobile communications is discussed. To satisfy the power and bandwidth limitations imposed on such systems, a combination of trellis coding and continuous-phase modulated signals are considered. Some schemes based on this idea are presented, and their performance is analyzed by computer simulation. The results obtained show that a scheme based on directional detection and Viterbi decoding appears promising for practical applications.
Load flows and faults considering dc current injections
NASA Technical Reports Server (NTRS)
Kusic, G. L.; Beach, R. F.
1991-01-01
The authors present novel methods for incorporating current injection sources into dc power flow computations and determining network fault currents when electronic devices limit fault currents. Combinations of current and voltage sources into a single network are considered in a general formulation. An example of relay coordination is presented. The present study is pertinent to the development of the Space Station Freedom electrical generation, transmission, and distribution system.
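A minimal sketch of how current and voltage sources can be combined in a single dc nodal formulation; the partitioning and names are illustrative and not the authors' method:

```python
import numpy as np

def solve_dc_network(G, i_inj, v_fixed):
    """Solve G v = i for a dc network in which the nodes listed in v_fixed
    ({node_index: volts}) are held at source voltages and the remaining
    nodes receive the current injections i_inj."""
    n = G.shape[0]
    fixed = sorted(v_fixed)
    free = [k for k in range(n) if k not in v_fixed]
    vf = np.array([v_fixed[k] for k in fixed])
    # move the known-voltage contributions to the right-hand side
    rhs = i_inj[free] - G[np.ix_(free, fixed)] @ vf
    v = np.zeros(n)
    v[fixed] = vf
    v[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)
    return v
```

One could, for example, iterate such a solve with injections clamped at converter current limits to approximate fault behaviour, in the spirit of the methods described above.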
Towards a real-time wide area motion imagery system
NASA Astrophysics Data System (ADS)
Young, R. I.; Foulkes, S. B.
2015-10-01
It is becoming increasingly important in both the defence and security domains to conduct persistent wide area surveillance (PWAS) of large populations of targets. Wide Area Motion Imagery (WAMI) is a key technique for achieving this wide area surveillance. The recent development of multi-million-pixel sensors has provided wide-field-of-view imagers with sufficient resolution to detect and track objects of interest across these extended areas of interest. WAMI sensors simultaneously provide high spatial and temporal resolutions, giving extreme pixel counts over large geographical areas. The high temporal resolution is required to enable effective tracking of targets. The provision of wide area coverage with high frame rates generates data deluge issues; these are especially profound if the sensor is mounted on an airborne platform, with finite data-link bandwidth and processing power that is constrained by size, weight and power (SWAP) limitations. These issues manifest themselves either as bottlenecks in the transmission of the imagery off-board or as latency in the time taken to analyse the data due to limited computational processing power.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alchorn, A L
Thank you for your interest in the activities of the Lawrence Livermore National Laboratory Computation Directorate. This collection of articles from the Laboratory's Science & Technology Review highlights the most significant computational projects, achievements, and contributions during 2002. In 2002, LLNL marked the 50th anniversary of its founding. Scientific advancement in support of our national security mission has always been the core of the Laboratory. So that researchers could better understand and predict complex physical phenomena, the Laboratory has pushed the limits of the largest, fastest, most powerful computers in the world. In the late 1950's, Edward Teller--one of the LLNL founders--proposed that the Laboratory commission a Livermore Advanced Research Computer (LARC) built to Livermore's specifications. He tells the story of being in Washington, DC, when John Von Neumann asked to talk about the LARC. Von Neumann thought Teller wanted too much memory in the machine. (The specifications called for 20-30,000 words.) Teller was too smart to argue with him. Later Teller invited Von Neumann to the Laboratory and showed him one of the design codes being prepared for the LARC. He asked Von Neumann for suggestions on fitting the code into 10,000 words of memory, and flattered him about "Labbies" not being smart enough to figure it out. Von Neumann dropped his objections, and the LARC arrived with 30,000 words of memory. Memory, and how close memory is to the processor, is still of interest to us today. Livermore's first supercomputer was the Remington-Rand Univac-1. It had 5600 vacuum tubes and was 2 meters wide by 4 meters long. This machine was commonly referred to as a 1 KFlop machine [E+3]. Skip ahead 50 years. The ASCI White machine at the Laboratory today, produced by IBM, is rated at a peak performance of 12.3 TFlops or E+13. We've improved computer processing power by 10 orders of magnitude in 50 years, and I do not believe there's any reason to think we won't improve another 10 orders of magnitude in the next 50 years. For years I have heard talk of hitting the physical limits of Moore's Law, but new technologies will take us into the next phase of computer processing power such as 3-D chips, molecular computing, quantum computing, and more. Big computers are icons or symbols of the culture and larger infrastructure that exists at LLNL to guide scientific discovery and engineering development. We have dealt with balance issues for 50 years and will continue to do so in our quest for a digital proxy of the properties of matter at extremely high temperatures and pressures. I believe that the next big computational win will be the merger of high-performance computing with information management. We already create terabytes--soon to be petabytes--of data. Efficiently storing, finding, visualizing and extracting data and turning that into knowledge which aids decision-making and scientific discovery is an exciting challenge. In the meantime, please enjoy this retrospective on computational physics, computer science, advanced software technologies, and applied mathematics performed by programs and researchers at LLNL during 2002. It offers a glimpse into the stimulating world of computational science in support of the national missions and homeland defense.
NASA Astrophysics Data System (ADS)
Hogenson, K.; Arko, S. A.; Buechler, B.; Hogenson, R.; Herrmann, J.; Geiger, A.
2016-12-01
A problem often faced by Earth science researchers is how to scale algorithms that were developed against a few datasets and take them to regional or global scales. One significant hurdle can be the processing and storage resources available for such a task, not to mention the administration of those resources. As a processing environment, the cloud offers nearly unlimited potential for compute and storage, with limited administration required. The goal of the Hybrid Pluggable Processing Pipeline (HyP3) project was to demonstrate the utility of the Amazon cloud to process large amounts of data quickly and cost effectively, while remaining generic enough to incorporate new algorithms with limited administration time or expense. Principally built by three undergraduate students at the ASF DAAC, the HyP3 system relies on core Amazon services such as Lambda, the Simple Notification Service (SNS), Relational Database Service (RDS), Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Beanstalk. The HyP3 user interface was written using Elastic Beanstalk, and the system uses SNS and Lambda to handle creating, instantiating, executing, and terminating EC2 instances automatically. Data are sent to S3 for delivery to customers and removed using standard data lifecycle management rules. In HyP3 all data processing is ephemeral; there are no persistent processes taking compute and storage resources or generating added cost. When complete, HyP3 will leverage the automatic scaling up and down of EC2 compute power to respond to event-driven demand surges correlated with natural disasters or reprocessing efforts. Massive simultaneous processing within EC2 will be able to match the demand spike in ways conventional physical computing power never could, and then tail off, incurring no costs when not needed. This presentation will focus on the development techniques and technologies that were used in developing the HyP3 system. Data and process flow will be shown, highlighting the benefits of the cloud for each step. Finally, the steps for integrating a new processing algorithm will be demonstrated. This is the true power of HyP3: allowing people to upload their own algorithms and execute them at archive-level scales.
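A heavily simplified sketch of the event-driven pattern described above, written against standard boto3 calls inside a Lambda handler; the AMI ID, instance type, and message format are placeholders, and this is not the HyP3 code itself:

```python
import boto3

def handler(event, context):
    """Hypothetical Lambda entry point: each SNS record requests a processing
    job, and a short-lived EC2 worker is launched to run it."""
    ec2 = boto3.client("ec2")
    for record in event.get("Records", []):
        job_id = record["Sns"]["Message"]                   # placeholder message format
        ec2.run_instances(
            ImageId="ami-00000000",                         # placeholder processing AMI
            InstanceType="c5.large",
            MinCount=1,
            MaxCount=1,
            UserData="#!/bin/bash\nrun_job %s\n" % job_id,  # hypothetical startup script
        )
    # Workers would upload products to S3 and terminate themselves when done,
    # so no persistent compute remains between jobs.
```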
Musrfit-Real Time Parameter Fitting Using GPUs
NASA Astrophysics Data System (ADS)
Locans, Uldis; Suter, Andreas
High transverse field μSR (HTF-μSR) experiments typically lead to rather large data sets, since it is necessary to follow the high frequencies present in the positron decay histograms. The analysis of these data sets can be very time consuming, usually due to the limited computational power of the hardware. To overcome the limited computing resources, a rotating reference frame (RRF) transformation is often used to reduce the data sets that need to be handled. This comes at a price that the μSR community is typically not aware of: (i) due to the RRF transformation the fitting parameter estimate is of poorer precision, i.e., more extended, expensive beamtime is needed; (ii) RRF introduces systematic errors which hamper the statistical interpretation of χ² or the maximum log-likelihood. We will briefly discuss these issues in a non-exhaustive practical way. The only reason for the RRF transformation is insufficient computing power. Therefore, during this work GPU (Graphics Processing Unit) based fitting was developed, which allows real-time full data analysis without RRF. GPUs have become increasingly popular in scientific computing in recent years. Due to their highly parallel architecture they provide the opportunity to accelerate many applications at considerably lower cost than upgrading the CPU computational power. With the emergence of frameworks such as CUDA and OpenCL these devices have become more easily programmable. During this work GPU support was added to Musrfit, a data analysis framework for μSR experiments. The new fitting algorithm uses CUDA or OpenCL to offload the most time consuming parts of the calculations to Nvidia or AMD GPUs. Using the current CPU implementation in Musrfit, parameter fitting can take hours for certain data sets, while the GPU version allows real-time data analysis on the same data sets. This work describes the challenges that arise in adding GPU support to Musrfit, as well as results obtained using the GPU version. The speedups using the GPU were measured relative to the CPU implementation. Two different GPUs were used for the comparison: a high-end Nvidia Tesla K40c GPU designed for HPC applications and an AMD Radeon R9 390X GPU designed for the gaming industry.
Resilient off-grid microgrids: Capacity planning and N-1 security
Madathil, Sreenath Chalil; Yamangil, Emre; Nagarajan, Harsha; ...
2017-06-13
Over the past century the electric power industry has evolved to support the delivery of power over long distances with highly interconnected transmission systems. Despite this evolution, some remote communities are not connected to these systems. These communities rely on small, disconnected distribution systems, i.e., microgrids to deliver power. However, as microgrids often are not held to the same reliability standards as transmission grids, remote communities can be at risk for extended blackouts. To address this issue, we develop an optimization model and an algorithm for capacity planning and operations of microgrids that include N-1 security and other practical modeling features like AC power flow physics, component efficiencies and thermal limits. Lastly, we demonstrate the computational effectiveness of our approach on two test systems: a modified version of the IEEE 13 node test feeder and a model of a distribution system in a remote community in Alaska.
Power Spectrum of a Noisy System Close to a Heteroclinic Orbit
NASA Astrophysics Data System (ADS)
Giner-Baldó, Jordi; Thomas, Peter J.; Lindner, Benjamin
2017-07-01
We consider a two-dimensional dynamical system that possesses a heteroclinic orbit connecting four saddle points. This system is not able to show self-sustained oscillations on its own. If endowed with white Gaussian noise it displays stochastic oscillations, the frequency and quality factor of which are controlled by the noise intensity. This stochastic oscillation of a nonlinear system with noise is conveniently characterized by the power spectrum of suitable observables. In this paper we explore different analytical and semianalytical ways to compute such power spectra. Besides a number of explicit expressions for the power spectrum, we find scaling relations for the frequency, spectral width, and quality factor of the stochastic heteroclinic oscillator in the limit of weak noise. In particular, the quality factor shows a slow logarithmic increase with decreasing noise of the form Q ∼ [ln(1/D)]^2. Our results are compared to numerical simulations of the respective Langevin equations.
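A generic way to obtain such spectra numerically, for comparison with the analytical expressions, is to integrate the Langevin equations and take a periodogram; the sketch below assumes additive isotropic noise of intensity D and is not the paper's semianalytical scheme:

```python
import numpy as np

def langevin_spectrum(drift, D, x0, dt=1e-3, n_steps=2**18, seed=0):
    """Euler-Maruyama integration of dx = f(x) dt + sqrt(2 D) dW, followed by a
    one-sided periodogram of the first coordinate (normalization schematic)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = np.empty(n_steps)
    for k in range(n_steps):
        x = x + drift(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
        traj[k] = x[0]
    spec = (np.abs(np.fft.rfft(traj - traj.mean())) ** 2) * dt / n_steps
    freqs = np.fft.rfftfreq(n_steps, d=dt)
    return freqs, spec
```

Fitting a Lorentzian around the spectral peak then gives the frequency, width, and quality factor whose weak-noise scaling is quoted above.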
Resilient off-grid microgrids: Capacity planning and N-1 security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madathil, Sreenath Chalil; Yamangil, Emre; Nagarajan, Harsha
Over the past century the electric power industry has evolved to support the delivery of power over long distances with highly interconnected transmission systems. Despite this evolution, some remote communities are not connected to these systems. These communities rely on small, disconnected distribution systems, i.e., microgrids to deliver power. However, as microgrids often are not held to the same reliability standards as transmission grids, remote communities can be at risk for extended blackouts. To address this issue, we develop an optimization model and an algorithm for capacity planning and operations of microgrids that include N-1 security and other practical modeling features like AC power flow physics, component efficiencies and thermal limits. Lastly, we demonstrate the computational effectiveness of our approach on two test systems: a modified version of the IEEE 13 node test feeder and a model of a distribution system in a remote community in Alaska.
Generative Representations for Computer-Automated Evolutionary Design
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2006-01-01
With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D objects.
Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.
Montalvo-Acosta, Joel José; Cecchini, Marco
2016-12-01
The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Experience with a sophisticated computer based authoring system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, P.R.
1984-04-01
In the November 1982 issue of ADCIS SIG CBT Newsletter the editor arrives at two conclusions regarding Computer Based Authoring Systems (CBAS): (1) CBAS drastically reduces programming time and the need for expert programmers, and (2) CBAS appears to have minimal impact on initial lesson design. Both of these comments have significant impact on any Cost-Benefit analysis for Computer-Based Training. The first tends to improve cost-effectiveness but only toward the limits imposed by the second. Westinghouse Hanford Company (WHC) recently purchased a sophisticated CBAS, the WISE/SMART system from Wicat (Orem, UT), for use in the Nuclear Power Industry. This report details our experience with this system relative to Items (1) and (2) above; lesson design time will be compared with lesson input time. Also provided will be the WHC experience in the use of subject matter experts (though computer neophytes) for the design and inputting of CBT materials.
Generative Representations for Computer-Automated Design Systems
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2004-01-01
With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design programs is the representation with which they encode designs. If the representation cannot encode a certain design, then the design program cannot produce it. Similarly, a poor representation makes some types of designs extremely unlikely to be created. Here we define generative representations as those representations which can create and reuse organizational units within a design and argue that reuse is necessary for design systems to scale to more complex and interesting designs. To support our argument we describe GENRE, an evolutionary design program that uses both a generative and a non-generative representation, and compare the results of evolving designs with both types of representations.
Improving the energy efficiency of sparse linear system solvers on multicore and manycore systems.
Anzt, H; Quintana-Ortí, E S
2014-06-28
While most recent breakthroughs in scientific research rely on complex simulations carried out in large-scale supercomputers, the power draft and energy spent for this purpose is increasingly becoming a limiting factor to this trend. In this paper, we provide an overview of the current status in energy-efficient scientific computing by reviewing different technologies used to monitor power draft as well as power- and energy-saving mechanisms available in commodity hardware. For the particular domain of sparse linear algebra, we analyse the energy efficiency of a broad collection of hardware architectures and investigate how algorithmic and implementation modifications can improve the energy performance of sparse linear system solvers, without negatively impacting their performance. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Acousto-optic time- and space-integrating spotlight-mode SAR processor
NASA Astrophysics Data System (ADS)
Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.
1993-09-01
The technical approach and recent experimental results for the acousto-optic time- and space-integrating real-time SAR image formation processor program are reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results include a demonstration of the processor's ability to perform high-resolution spotlight-mode SAR imaging by simultaneously compensating for range migration and range/azimuth coupling in the analog optical domain, thereby avoiding a highly power-consuming digital interpolation or reformatting operation usually required in all-electronic approaches.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
Persistent Memory in Single Node Delay-Coupled Reservoir Computing.
Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem
2016-01-01
Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. The evidence is mounting, not only to the presence of delays as physical constraints in signal propagation speed, but also to their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.
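A compact sketch of a single-node delay-coupled reservoir of the kind described, using a discretized map per virtual node, a random binary input mask, a tanh nonlinearity, and a ridge-regression readout; all constants are illustrative, and the extension with trained linear feedback would additionally feed the readout output back into the input stream:

```python
import numpy as np

def run_delay_reservoir(u, n_virtual=50, gamma=0.5, eta=0.4, seed=0):
    """Drive a single nonlinear node with delayed self-coupling by a masked,
    time-multiplexed input u; return the matrix of virtual-node states."""
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_virtual)      # random binary input mask
    delay_line = np.zeros(n_virtual)                     # node states over one delay period
    states = np.empty((len(u), n_virtual))
    for t, u_t in enumerate(u):
        for i in range(n_virtual):
            delay_line[i] = np.tanh(eta * delay_line[i] + gamma * mask[i] * u_t)
        states[t] = delay_line
    return states

def train_readout(states, targets, ridge=1e-6):
    """Linear readout trained by ridge regression (fading-memory case)."""
    X = np.hstack([states, np.ones((len(states), 1))])   # add a bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)
```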
Persistent Memory in Single Node Delay-Coupled Reservoir Computing
Pipa, Gordon; Toutounji, Hazem
2016-01-01
Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. The evidence is mounting, not only to the presence of delays as physical constraints in signal propagation speed, but also to their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality. PMID:27783690
Rich client data exploration and research prototyping for NOAA
NASA Astrophysics Data System (ADS)
Grossberg, Michael; Gladkova, Irina; Guch, Ingrid; Alabi, Paul; Shahriar, Fazlul; Bonev, George; Aizenman, Hannah
2009-08-01
Data from satellites and model simulations is increasing exponentially as observations and model computing power improve rapidly. Not only is technology producing more data, but it often comes from sources all over the world. Researchers and scientists who must collaborate are also located globally. This work presents a software design and technologies which will make it possible for groups of researchers to explore large data sets visually together without the need to download these data sets locally. The design will also make it possible to exploit high performance computing remotely and transparently to analyze and explore large data sets. Computer power, high quality sensing, and data storage capacity have improved at a rate that outstrips our ability to develop software applications that exploit these resources. It is impractical for NOAA scientists to download all of the satellite and model data that may be relevant to a given problem and the computing environments available to a given researcher range from supercomputers to only a web browser. The size and volume of satellite and model data are increasing exponentially. There are at least 50 multisensor satellite platforms collecting Earth science data. On the ground and in the sea there are sensor networks, as well as networks of ground based radar stations, producing a rich real-time stream of data. This new wealth of data would have limited use were it not for the arrival of large-scale high-performance computation provided by parallel computers, clusters, grids, and clouds. With these computational resources and vast archives available, it is now possible to analyze subtle relationships which are global, multi-modal and cut across many data sources. Researchers, educators, and even the general public, need tools to access, discover, and use vast data center archives and high performance computing through a simple yet flexible interface.
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2016-12-01
Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
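One common way to realize the additive white-plus-power-law model described here is to filter white noise with fractional-integration coefficients; the sketch below uses the standard recursion for a 1/f^n process and is illustrative rather than the program's internal implementation:

```python
import numpy as np

def powerlaw_noise(n_samples, index, sigma=1.0, seed=0):
    """Generate a 1/f**index noise sequence by convolving white noise with
    fractional-integration filter coefficients h_k (with d = index / 2)."""
    rng = np.random.default_rng(seed)
    d = index / 2.0
    h = np.empty(n_samples)
    h[0] = 1.0
    for k in range(1, n_samples):
        h[k] = h[k - 1] * (k - 1 + d) / k     # recursion for (1 - B)**(-d)
    w = sigma * rng.standard_normal(n_samples)
    return np.convolve(h, w)[:n_samples]      # temporally correlated noise
```

Adding an independent white-noise sequence to this output gives a synthetic series with the white-plus-power-law background against which such an estimator can be tested.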
A review of acoustic power transfer for bio-medical implants
NASA Astrophysics Data System (ADS)
Basaeri, Hamid; Christensen, David B.; Roundy, Shad
2016-12-01
Bio-implantable devices have been used to perform therapeutic functions such as drug delivery or diagnostic monitoring of physiological parameters. Proper operation of these devices depends on the continuous reliable supply of power. A battery, which is the conventional method to supply energy, is problematic in many of these devices as it limits the lifetime of the implant or dominates the size. In order to power implantable devices, power transfer techniques have been implemented as an attractive alternative to batteries and have received significant research interest in recent years. Acoustic waves are increasingly being investigated as a method for delivering power through human skin and the human body. Acoustic power transfer (APT) has some advantages over other powering techniques such as inductive power transfer and mid range RF power transmission. These advantages include lower absorption in tissue, shorter wavelength enabling smaller transducers, and higher power intensity threshold for safe operation. This paper will cover the basic physics and modeling of APT and will review the current state of acoustic (or ultrasonic) power transfer for biomedical implants. As the sensing and computational elements for biomedical implants are becoming very small, we devote particular attention to the scaling of acoustic and alternative power transfer techniques. Finally, we present current issues and challenges related to the implementation of this technique for powering implantable devices.
MicroSensors Systems: detection of a dismounted threat
NASA Astrophysics Data System (ADS)
Davis, Bill; Berglund, Victor; Falkofske, Dwight; Krantz, Brian
2005-05-01
The Micro Sensor System (MSS) is a layered sensor network with the goal of detecting dismounted threats approaching high value assets. A low power unattended ground sensor network depends on a network protocol for efficiency in order to minimize data transmissions after network establishment. The reduction of network 'chattiness' is a primary driver for minimizing power consumption and is a factor in establishing a low probability of detection and interception. The MSS has developed a unique protocol to meet these challenges. Unattended ground sensor systems are most likely dependent on batteries for power, whose size determines the ability of the sensor to be concealed after placement. To minimize power requirements, overcome size limitations, and maintain a low system cost, the MSS utilizes advanced manufacturing processes known as Fluidic Self-Assembly and Chip Scale Packaging. The type of sensing element and the ability to sense various phenomenologies (particularly magnetic) at ranges greater than a few meters limit the effectiveness of a system. The MicroSensor System will overcome these limitations by deploying large numbers of low cost sensors, which is made possible by the advanced manufacturing process used in production of the sensors. The MSS program will provide unprecedented levels of real-time battlefield information which greatly enhances combat situational awareness when integrated with the existing Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) infrastructure. This system will provide an important boost to realizing the information dominant, network-centric objective of Joint Vision 2020.
Subsystems component definitions summary program
NASA Technical Reports Server (NTRS)
Scott, A. Don; Thomas, Carolyn C.; Simonsen, Lisa C.; Hall, John B., Jr.
1991-01-01
A computer program, the Subsystems Component Definitions Summary (SUBCOMDEF), was developed to provide a quick and efficient means of summarizing large quantities of subsystems component data in terms of weight, volume, resupply, and power. The program was validated using Space Station Freedom Program Definition Requirements Document data for the internal and external thermal control subsystem. Once all component descriptions, unit weights and volumes, resupply, and power data are input, the user may obtain a summary report of user-specified portions of the subsystem or of the entire subsystem as a whole. Any combination or all of the parameters of wet and dry weight, wet and dry volume, resupply weight and volume, and power may be displayed. The user may vary the resupply period according to individual mission requirements, as well as the number of hours per day power consuming components operate. Uses of this program are not limited only to subsystem component summaries. Any applications that require quick, efficient, and accurate weight, volume, resupply, or power summaries would be well suited to take advantage of SUBCOMDEF's capabilities.
Jet impingement heat transfer enhancement for the GPU-3 Stirling engine
NASA Technical Reports Server (NTRS)
Johnson, D. C.; Congdon, C. W.; Begg, L. L.; Britt, E. J.; Thieme, L. G.
1981-01-01
A computer model of the combustion-gas-side heat transfer was developed to predict the effects of a jet impingement system and the possible range of improvements available. Using low temperature (315 C (600 F)) pretest data in an updated model, a high temperature silicon carbide jet impingement heat transfer system was designed and fabricated. The system model predicted that at the theoretical maximum limit, jet impingement enhanced heat transfer can: (1) reduce the flame temperature by 275 C (500 F); (2) reduce the exhaust temperature by 110 C (200 F); and (3) increase the overall heat into the working fluid by 10%, all for an increase in required pumping power of less than 0.5% of the engine power output. Initial tests on the GPU-3 Stirling engine at NASA-Lewis demonstrated that the jet impingement system increased the engine output power and efficiency by 5% - 8% with no measurable increase in pumping power. The overall heat transfer coefficient was increased by 65% for the maximum power point of the tests.
Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena
2010-09-30
Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next generation high performance computing (HPC) resources lead to significant reductions in execution times to leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes even when systems are carefully load-balanced and advanced IO strategies are employed.
Cisco Networking Academy Program for high school students: Formative & summative evaluation
NASA Astrophysics Data System (ADS)
Cranford-Wesley, Deanne
This study examined the effectiveness of the Cisco Network Technology Program in enhancing students' technology skills as measured by classroom strategies, student motivation, student attitude, and student learning. Qualitative and quantitative methods were utilized to determine the effectiveness of this program. The study focused on two 11th grade classrooms at Hamtramck High School. Hamtramck, an inner-city community located in Detroit, is racially and ethnically diverse. The majority of students speak English as a second language; more than 20 languages are represented in the school district. More than 70% of the students are considered to be economically at risk. Few students have computers at home, and their access to the few computers at school is limited. Purposive sampling was conducted for this study. The sample consisted of 40 students, all of whom were trained in Cisco Networking Technologies. The researcher examined viable learning strategies in teaching a Cisco Networking class that focused on a web-based approach. Findings revealed that the Cisco Networking Academy Program was an excellent vehicle for teaching networking skills and, therefore, helping to enhance computer skills for the participating students. However, only a limited number of students were able to participate in the program, due to limited computer labs and lack of qualified teaching personnel. In addition, the cumbersome technical language posed an obstacle to students' success in networking. Laboratory assignments were preferred by 90% of the students over lecture and PowerPoint presentations. Practical applications, lab projects, interactive assignments, PowerPoint presentations, lectures, discussions, readings, research, and assessment all helped to increase student learning and proficiency and to enrich the classroom experience. Classroom strategies are crucial to student success in the networking program. Equipment must be updated and utilized to ensure that students are applying practical skills to networking concepts. The results also suggested a high level of motivation and retention in student participants. Students in both classes scored 80% proficiency on the Achievement Motivation Profile Assessment. The identified standard proficiency score was 70%, and both classes exceeded the standard.
Decision-Theoretic Control of Planetary Rovers
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo; Washington, Richard; Bernstein, Daniel S.; Mouaddib, Abdel-Illah; Morris, Robert (Technical Monitor)
2003-01-01
Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We describe two decision-theoretic approaches to maximize the productivity of planetary rovers: one based on adaptive planning and the other on hierarchical reinforcement learning. Both approaches map the problem into a Markov decision problem and attempt to solve a large part of the problem off-line, exploiting the structure of the plan and independence between plan components. We examine the advantages and limitations of these techniques and their scalability.
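As a concrete illustration of the underlying formalism, a finite Markov decision problem can be solved off-line by value iteration; the tabular sketch below is generic and is not the rover planners themselves:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP off-line.
    P: transition probabilities, shape (n_actions, n_states, n_states)
    R: expected rewards, shape (n_actions, n_states)
    Returns the optimal value function and a greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("asn,n->as", P, V)   # action values per state
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=0)
```

In practice, such off-line solutions would be stored as compact policies so that the rover's limited on-board computation is spent only on look-up and light adaptation.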
Bounds on the power of proofs and advice in general physical theories.
Lee, Ciarán M; Hoban, Matty J
2016-06-01
Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text], which does not make use of any uniquely quantum structure-such as the fact that observables correspond to self-adjoint operators-and thus may be of independent interest.
Thermoelectric properties of an interacting quantum dot based heat engine
NASA Astrophysics Data System (ADS)
Erdman, Paolo Andrea; Mazza, Francesco; Bosisio, Riccardo; Benenti, Giuliano; Fazio, Rosario; Taddei, Fabio
2017-06-01
We study the thermoelectric properties and heat-to-work conversion performance of an interacting, multilevel quantum dot (QD) weakly coupled to electronic reservoirs. We focus on the sequential tunneling regime. The dynamics of the charge in the QD is studied by means of master equations for the probabilities of occupation. From these we compute the charge and heat currents in the linear response regime. Assuming a generic multiterminal setup, and for low temperatures (quantum limit), we obtain analytical expressions for the transport coefficients which account for the interplay between interactions (charging energy) and level quantization. In the case of systems with two and three terminals we derive formulas for the power factor Q and the figure of merit ZT for a QD-based heat engine, identifying optimal working conditions which maximize output power and efficiency of heat-to-work conversion. Beyond the linear response we concentrate on the two-terminal setup. We first study the thermoelectric nonlinear coefficients, assessing the consequences of large temperature and voltage biases and focusing on the breakdown of the Onsager reciprocal relation between thermopower and Peltier coefficient. We then investigate the conditions which optimize the performance of a heat engine, finding that in the quantum limit output power and efficiency at maximum power can almost be simultaneously maximized by choosing appropriate values of electrochemical potential and bias voltage. Finally, we study how energy level degeneracy can increase the output power.
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10⁻² for logic) that cause computational bit error rates as high as 50%.
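As a rough illustration of how such hardware error sources can be emulated in software, the sketch below flips stored bits of a toy fixed-point classifier's coefficients at a given rate and measures agreement with fault-free behavior; the word width, fault rates, and synthetic data are assumptions for illustration, not the paper's CHB-MIT/MIT-BIH setup.

    import numpy as np

    rng = np.random.default_rng(0)

    def flip_bits(words, p, width=8):
        # Emulate SRAM bit-cell faults: each stored bit flips independently with probability p.
        mask = rng.random((words.size, width)) < p
        flips = (mask * (1 << np.arange(width))).sum(axis=1).astype(words.dtype)
        return words ^ flips

    # Toy linear detector with 8-bit fixed-point weights (illustrative only).
    w_true = rng.integers(0, 256, size=16, dtype=np.uint8)
    x = rng.integers(0, 256, size=(1000, 16))
    labels = x @ w_true.astype(int) > np.median(x @ w_true.astype(int))

    for p in (0.0, 1e-3, 1e-1, 0.5):
        w_faulty = flip_bits(w_true.copy(), p)
        pred = x @ w_faulty.astype(int) > np.median(x @ w_faulty.astype(int))
        print(f"bit-fault rate {p:>5}: agreement with fault-free labels "
              f"{(pred == labels).mean():.2f}")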
Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it
2012-12-01
A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations due to multi-streaming at small scales and low redshifts.
Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors
NASA Technical Reports Server (NTRS)
Flatley, Thomas P.
2015-01-01
SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.
Real-Time Spatio-Temporal Twice Whitening for MIMO Energy Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; Mitra, Pramita; Barhen, Jacob
2010-01-01
While many techniques exist for local spectrum sensing of a primary user, each represents a computationally demanding task to secondary user receivers. In software-defined radio, computational complexity lengthens the time for a cognitive radio to recognize changes in the transmission environment. This complexity is even more significant for spatially multiplexed receivers, e.g., in SIMO and MIMO, where the spatio-temporal data sets grow in size with the number of antennae. Limits on power and space for the processor hardware further constrain SDR performance. In this report, we discuss improvements in spatio-temporal twice whitening (STTW) for real-time local spectrum sensing by demonstrating a form of STTW well suited for MIMO environments. We implement STTW on the Coherent Logix hx3100 processor, a multicore processor intended for low-power, high-throughput software-defined signal processing. These results demonstrate how coupling the novel capabilities of emerging multicore processors with algorithmic advances can enable real-time, software-defined processing of large spatio-temporal data sets.
Sleep apps: what role do they play in clinical medicine?
Lorenz, Christopher P; Williams, Adrian J
2017-11-01
Today's smartphones boast more computing power than the Apollo Guidance Computer. Given the ubiquity and popularity of smartphones, are we already carrying around miniaturized sleep labs in our pockets? There is still a lack of validation studies for consumer sleep technologies in general and apps for monitoring sleep in particular. To overcome this gap, multidisciplinary teams are needed that focus on feasibility work at the intersection of software engineering, data science and clinical sleep medicine. To date, no smartphone app for monitoring sleep through movement sensors has been successfully validated against polysomnography, despite the role and validity of actigraphy in sleep medicine having been well established. The key limiting factor is not methodology but a missing separation of concerns: the two essential steps in the monitoring process, data collection and scoring, are chained together inside a black box due to the closed nature of consumer devices. This leaves researchers with little influence over the process and no access to raw data. Multidisciplinary teams that wield complete power over the sleep monitoring process are sorely needed.
Power limits for microbial life.
LaRowe, Douglas E; Amend, Jan P
2015-01-01
To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here, we apply a bioenergetic model to a well-characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm⁻³) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ∼10⁻¹² and 10⁻¹⁶ W cm⁻³. The rates of POC degradation are calculated using a continuum model, while Gibbs energies have been computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells per cm³ can be well captured using a maintenance power of 190 zW cell⁻¹, two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell⁻¹, with most values under ∼300 zW cell⁻¹. Furthermore, our analysis places the absolute minimum power requirement for a single cell to remain viable on the order of 1 zW cell⁻¹.
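The per-cell power bookkeeping described here amounts to dividing a volumetric power supply by a cell density; a minimal sketch with round placeholder numbers drawn from the quoted ranges (not the actual Site U1370 depth profiles) is given below.

    # Back-of-the-envelope per-cell power estimate, using round placeholder numbers
    # within the ranges quoted in the abstract (not the actual Site U1370 profiles).
    power_supply_W_per_cm3 = 1e-14      # volumetric power from POC degradation, W cm^-3
    cell_density_per_cm3   = 3e4        # cells per cm^3 of sediment

    power_per_cell_W  = power_supply_W_per_cm3 / cell_density_per_cm3
    power_per_cell_zW = power_per_cell_W / 1e-21   # 1 zeptowatt = 1e-21 W

    print(f"~{power_per_cell_zW:.0f} zW per cell")   # ~333 zW per cell for these inputs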
Convergent method of and apparatus for distributed control of robotic systems using fuzzy logic
Feddema, John T.; Driessen, Brian J.; Kwok, Kwan S.
2002-01-01
A decentralized fuzzy logic control system for one vehicle or for multiple robotic vehicles provides a way to control each vehicle to converge on a goal without collisions between vehicles or collisions with other obstacles, in the presence of noisy input measurements and a limited amount of compute-power and memory on board each robotic vehicle. The fuzzy controller demonstrates improved robustness to noise relative to an exact controller.
Applications of charge-coupled device transversal filters to communication
NASA Technical Reports Server (NTRS)
Buss, D. D.; Bailey, W. H.; Brodersen, R. W.; Hewes, C. R.; Tasch, A. F., Jr.
1975-01-01
The paper discusses the computational power of state-of-the-art charge-coupled device (CCD) transversal filters in communications applications. Some of the performance limitations of CCD transversal filters are discussed, with attention given to time delay and bandwidth, imperfect charge transfer efficiency, weighting coefficient error, noise, and linearity. The application of CCD transversal filters to matched filtering, spectral filtering, and Fourier analysis is examined. Techniques for making programmable transversal filters are briefly outlined.
Absil, Philippe P; Verheyen, Peter; De Heyn, Peter; Pantouvaki, Marianna; Lepage, Guy; De Coster, Jeroen; Van Campenhout, Joris
2015-04-06
Silicon photonics integrated circuits are considered to enable future computing systems with optical input-outputs co-packaged with CMOS chips to circumvent the limitations of electrical interfaces. In this paper we present the recent progress made to enable dense multiplexing by exploiting the integration advantage of silicon photonics integrated circuits. We also discuss the manufacturability of such circuits, a key factor for a wide adoption of this technology.
ERIC Educational Resources Information Center
Dehinbo, Johnson
2010-01-01
Email harnesses the power of Web 1.0 to let users access their messages from any computer or mobile device connected to the Internet, making it valuable for acquiring and transferring knowledge. But the advent of Web 2.0 and social networking seems to indicate certain limitations of email. The use of social networking seems…
Predicting solar radiation based on available weather indicators
NASA Astrophysics Data System (ADS)
Sauer, Frank Joseph
Solar radiation prediction models are complex and require software that is not available to the household investor, even though the processing power of a normal desktop or laptop computer is sufficient to calculate similar models. This barrier to entry for the average consumer can be removed by a model simple enough to be calculated by hand if necessary. Solar radiation has historically been difficult to predict, and accurate models carry significant assumptions and restrictions on their use. Previous methods have been limited to linear relationships, to particular locations, or to input data describing a single atmospheric condition. This research takes a novel approach by combining two techniques within the computational limits of a household computer: clustering and Hidden Markov Models (HMMs). Clustering helps limit the large observation space that otherwise restricts the use of HMMs. Instead of using continuous data, which would require significantly more computation, the cluster label serves as a qualitative descriptor of each observation. HMMs incorporate a level of uncertainty and take into account the indirect relationship between meteorological indicators and solar radiation. This reduces the complexity of the model enough to be simply understood and accessible to the average household investor. The solar radiation is treated as an unobservable state that each household is unable to measure. The daily high temperature and the sky coverage are already available through the local or preferred source of weather information. By using the next day's prediction for high temperature and sky coverage, the model groups the data and then predicts the most likely range of radiation. This model uses simple techniques and calculations to give a broad estimate of the solar radiation where no other universal model exists for the average household.
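A minimal sketch of the two-step idea, assuming made-up bins and probabilities rather than the dissertation's fitted values: the observable indicators (forecast high temperature and sky coverage) are reduced to a small set of discrete symbols by a crude clustering rule, and a discrete HMM forward recursion then yields a belief over hidden solar-radiation levels.

    import numpy as np

    # Hidden states: qualitative solar-radiation levels (illustrative).
    STATES = ["low", "medium", "high"]
    # Transition and emission probabilities are placeholders, not fitted values.
    A = np.array([[0.6, 0.3, 0.1],    # P(next state | current state)
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])
    B = np.array([[0.7, 0.2, 0.1],    # P(observed symbol | state)
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])
    pi = np.array([1/3, 1/3, 1/3])

    def cluster(high_temp_C, sky_cover_tenths):
        # Crude "clustering": bin the two indicators into 3 qualitative symbols.
        score = high_temp_C / 40.0 + (10 - sky_cover_tenths) / 10.0
        return 0 if score < 0.8 else (1 if score < 1.4 else 2)

    def forward_filter(symbols):
        # Standard HMM forward recursion; returns the filtered state distribution.
        alpha = pi * B[:, symbols[0]]
        alpha /= alpha.sum()
        for o in symbols[1:]:
            alpha = (alpha @ A) * B[:, o]
            alpha /= alpha.sum()
        return alpha

    obs = [cluster(t, c) for t, c in [(18, 9), (24, 5), (31, 2)]]  # forecast inputs
    belief = forward_filter(obs)
    print("P(radiation level):", dict(zip(STATES, belief.round(2))))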
Heat-Assisted Magnetic Recording: Fundamental Limits to Inverse Electromagnetic Design
NASA Astrophysics Data System (ADS)
Bhargava, Samarth
In this dissertation, we address the burgeoning fields of diffractive optics, metal-optics and plasmonics, and computational inverse problems in the engineering design of electromagnetic structures. We focus on the application of the optical nano-focusing system that will enable Heat-Assisted Magnetic Recording (HAMR), a higher density magnetic recording technology that will fulfill the exploding worldwide demand for digital data storage. The heart of HAMR is a system that focuses light to a nanoscale, sub-diffraction-limit spot with an extremely high power density via an optical antenna. We approach this engineering problem by first discussing the fundamental limits of nano-focusing and the material limits for metal-optics and plasmonics. Then, we use efficient gradient-based optimization algorithms to computationally design shapes of 3D nanostructures that outperform human designs on the basis of mass-market product requirements. In 2014, the world manufactured ~1 zettabyte (ZB), i.e., 1 billion terabytes (TB), of data storage devices, including ~560 million magnetic hard disk drives (HDDs). Global demand for storage will likely increase by 10x in the next 5-10 years, and manufacturing capacity cannot keep up with demand alone. We discuss the state-of-the-art HDD and why industry invented Heat-Assisted Magnetic Recording (HAMR) to overcome the data density limitations. HAMR leverages the temperature sensitivity of magnets, in which the coercivity suddenly and non-linearly falls at the Curie temperature. Data recording to high-density hard disks can be achieved by locally heating one bit of information while co-applying a magnetic field. The heating can be achieved by focusing 100 μW of light to a 30 nm diameter spot on the hard disk. This is an enormous light intensity, roughly ~100,000,000x the intensity of sunlight on the earth's surface! This power density is ~1,000x the output of gold-coated tapered optical fibers used in Near-field Scanning Optical Microscopes (NSOM), which is the incumbent technology allowing the focus of light to the nano-scale. Even in these lower power NSOM probe tips, optical self-heating and deformation of the nano-gold tips are significant reliability and performance bottlenecks. Hence, the design and manufacture of the higher power optical nano-focusing system for HAMR must overcome great engineering challenges in optical and thermal performance. There has been much debate about alternative materials for metal-optics and plasmonics to cure the current plague of optical loss and thermal reliability in this burgeoning field. We clear the air. For an application like HAMR, where intense self-heating occurs, refractory metals and metal nitrides with high melting points but low optical and thermal conductivities are inferior to noble metals. This conclusion contradicts several claims and may be counter-intuitive to some, but the analysis is simple, evident and relevant to any engineer working on metal-optics and plasmonics. Indeed, the best metals for DC and RF electronics are also the best at optical frequencies. We also argue that the geometric design of electromagnetic structures (especially sub-wavelength devices) is too cumbersome for human designers, because the wave nature of light necessitates that this inverse problem be non-convex and non-linear. When the computation for one forward simulation is extremely demanding (hours on a high-performance computing cluster), typical designers constrain themselves to only 2 or 3 degrees of freedom.
We attack the inverse electromagnetic design problem using gradient-based optimization after leveraging the adjoint method to efficiently calculate the gradient (i.e., the sensitivity) of an objective function with respect to thousands to millions of parameters. This approach results in creative computational designs of electromagnetic structures that human designers could not have conceived, yet that yield better optical performance. After gaining key insights from the fundamental limits and building our Inverse Electromagnetic Design software, we finally attempt to solve the challenges in enabling HAMR and the future supply of digital data storage hardware. In 2014, the hard disk industry spent ~$200 million on R&D, but poor optical and thermal performance of the metallic nano-transducer continues to prevent a commercial HAMR product. Via our design process, we computationally generated designs for the nano-focusing system that meet specifications for higher data density, lower adjacent track interference, lower laser power requirements and, most notably, lower self-heating of the crucial metallic nano-antenna. We believe that computational design will be a crucial component in commercial HAMR as well as many other commercially significant applications of micro- and nano-optics. If successful in commercializing HAMR, the hard disk industry may sell 1 billion HDDs per year by 2025, with an average of 6 semiconductor diode lasers and 6 optical chips per drive. The key players will become the largest manufacturers of integrated optical chips and nano-antennas in the world. This industry will perform millions of single-mode laser alignments per day. (Abstract shortened by UMI.)
LIBS data analysis using a predictor-corrector based digital signal processor algorithm
NASA Astrophysics Data System (ADS)
Sanders, Alex; Griffin, Steven T.; Robinson, Aaron
2012-06-01
There are many accepted sensor technologies for generating spectra for material classification. Once the spectra are generated, communication bandwidth limitations favor local material classification, with its attendant reduction in data transfer rates and power consumption. Transferring sensor technologies such as Cavity Ring-Down Spectroscopy (CRDS) and Laser Induced Breakdown Spectroscopy (LIBS) to portable platforms therefore requires effective material classifiers. A result of recent efforts has been emphasis on Partial Least Squares - Discriminant Analysis (PLS-DA) and Principal Component Analysis (PCA). Implementation of these via general-purpose computers is difficult in small portable sensor configurations. This paper addresses the creation of a low-mass, low-power, robust hardware spectra classifier for a limited set of predetermined materials in an atmospheric matrix. Crucial to this is the incorporation of PCA or PLS-DA classifiers into a predictor-corrector style implementation. The system configuration guarantees rapid convergence. Software running on multi-core Digital Signal Processors (DSPs) simulates a streamlined plasma physics model estimator, reducing Analog-to-Digital Converter (ADC) power requirements. This paper presents the results of a predictor-corrector model implemented on a low-power multi-core DSP to perform substance classification. This configuration emphasizes the hardware system and software design via a predictor-corrector model that decreases the sample rate while simultaneously performing the classification.
Autonomous target tracking of UAVs based on low-power neural network hardware
NASA Astrophysics Data System (ADS)
Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe
2014-05-01
Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware that is capable of massively parallel pattern recognition processing while demanding only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and its superior performance and power advantages for real-time, autonomous target tracking.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important for the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
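For reference, the Landauer principle invoked here sets a floor of kT ln 2 joules per irreversible bit operation; the sketch below evaluates that bound for a placeholder workload and temperature (real electronics dissipate many orders of magnitude more per bit than this floor).

    from math import log

    k_B = 1.380649e-23       # Boltzmann constant, J/K
    T = 300.0                # operating temperature, K (assumed)

    landauer_J_per_bit = k_B * T * log(2)   # minimum energy to erase one bit

    # Placeholder workload: bit operations per second handled by a small-cell BS.
    bit_ops_per_s = 1e18
    lower_bound_W = landauer_J_per_bit * bit_ops_per_s

    print(f"Landauer bound per bit: {landauer_J_per_bit:.2e} J")
    print(f"Thermodynamic floor for this workload: {lower_bound_W:.2e} W")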
Energy Efficiency Challenges of 5G Small Cell Networks
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-01-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important for the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. PMID:28757670
Data compression using Chebyshev transform
NASA Technical Reports Server (NTRS)
Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)
2007-01-01
The present invention is a method, system, and computer program product for implementation of a capable, general-purpose compression algorithm that can be engaged on the fly. This invention has particular practical application to time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
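A minimal software sketch of the underlying idea, fitting a segment of a time series with low-order Chebyshev coefficients and keeping only the coefficients, is shown below; the signal, segment length, and polynomial degree are illustrative and this is not the patented on-board implementation.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Illustrative time series: 256 samples of a smooth signal plus mild noise.
    t = np.linspace(-1.0, 1.0, 256)
    y = (np.sin(3 * t) + 0.3 * np.cos(7 * t)
         + 0.01 * np.random.default_rng(1).normal(size=t.size))

    deg = 15                              # keep only 16 Chebyshev coefficients
    coeffs = C.chebfit(t, y, deg)         # "compressed" representation
    y_hat = C.chebval(t, coeffs)          # reconstruction at the decoder

    ratio = y.size / coeffs.size
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    print(f"compression ratio ~{ratio:.0f}:1, reconstruction RMSE {rmse:.3e}")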
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
Beam and Plasma Physics Research
1990-06-01
This record's abstract is OCR-damaged; the recoverable content covers high-power microwave (HPM) computations and theory and high-energy plasma computations and theory (the HPM computations concentrated on...), plus a partially recoverable report index: 2.1 Report Index; 2.2 Task Area 2: High-Power RF Emission and Charged-Particle Beam Physics Computation, Modeling and Theory; 2.2.1 Subtask 02-01...; Vulnerability of Space Assets; 2.2.6 Subtask 02-06, Microwave Computer Program Enhancements; 2.2.7 Subtask 02-07, High-Power Microwave Transvertron Design.
3-D Electromagnetic field analysis of wireless power transfer system using K computer
NASA Astrophysics Data System (ADS)
Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi
2018-05-01
We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. We show that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield is refined.
Optimized blind gamma-ray pulsar searches at fixed computing budget
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de
The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.
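For context, incoherent harmonic summing simply adds the Fourier powers at a trial frequency and its first few multiples; the toy sketch below, with an invented photon list and a Rayleigh-test style power, shows the operation itself (it says nothing about the cost-benefit analysis carried out in the paper).

    import numpy as np

    rng = np.random.default_rng(2)
    f0 = 2.5                      # true pulse frequency of the toy signal, Hz
    T_obs = 200.0                 # observation span, s
    bg = rng.random(400) * T_obs                                   # background photons
    pulsed = (rng.integers(0, int(T_obs * f0), 100)
              + 0.02 * rng.normal(size=100)) / f0                  # narrow pulses
    t = np.concatenate([bg, pulsed])

    def fourier_power(f):
        # Rayleigh-test style power of the photon arrival times at trial frequency f.
        ph = 2 * np.pi * f * t
        return (np.cos(ph).sum() ** 2 + np.sin(ph).sum() ** 2) / t.size

    freqs = np.linspace(0.5, 4.0, 3500)
    P1 = np.array([fourier_power(f) for f in freqs])
    # Incoherent harmonic summing: add the powers at f, 2f, 3f, 4f for each trial f.
    P_sum = sum(np.array([fourier_power(h * f) for f in freqs]) for h in range(1, 5))

    print("peak (single harmonic):  f =", round(freqs[P1.argmax()], 3))
    print("peak (harmonic summing): f =", round(freqs[P_sum.argmax()], 3))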
Fast non-Abelian geometric gates via transitionless quantum driving.
Zhang, J; Kyaw, Thi Ha; Tong, D M; Sjöqvist, Erik; Kwek, Leong-Chuan
2015-12-21
A practical quantum computer must be capable of performing high fidelity quantum gates on a set of quantum bits (qubits). In the presence of noise, the realization of such gates poses daunting challenges. Geometric phases, which possess intrinsic noise-tolerant features, hold the promise for performing robust quantum computation. In particular, quantum holonomies, i.e., non-Abelian geometric phases, naturally lead to universal quantum computation due to their non-commutativity. Although quantum gates based on adiabatic holonomies have already been proposed, the slow evolution eventually compromises qubit coherence and computational power. Here, we propose a general approach to speed up an implementation of adiabatic holonomic gates by using transitionless driving techniques and show how such a universal set of fast geometric quantum gates in a superconducting circuit architecture can be obtained in an all-geometric approach. Compared with standard non-adiabatic holonomic quantum computation, the holonomies obtained in our approach tend asymptotically to those of the adiabatic approach in the long run-time limit and thus might open up a new horizon for realizing a practical quantum computer.
Fast non-Abelian geometric gates via transitionless quantum driving
Zhang, J.; Kyaw, Thi Ha; Tong, D. M.; Sjöqvist, Erik; Kwek, Leong-Chuan
2015-01-01
A practical quantum computer must be capable of performing high fidelity quantum gates on a set of quantum bits (qubits). In the presence of noise, the realization of such gates poses daunting challenges. Geometric phases, which possess intrinsic noise-tolerant features, hold the promise for performing robust quantum computation. In particular, quantum holonomies, i.e., non-Abelian geometric phases, naturally lead to universal quantum computation due to their non-commutativity. Although quantum gates based on adiabatic holonomies have already been proposed, the slow evolution eventually compromises qubit coherence and computational power. Here, we propose a general approach to speed up an implementation of adiabatic holonomic gates by using transitionless driving techniques and show how such a universal set of fast geometric quantum gates in a superconducting circuit architecture can be obtained in an all-geometric approach. Compared with standard non-adiabatic holonomic quantum computation, the holonomies obtained in our approach tend asymptotically to those of the adiabatic approach in the long run-time limit and thus might open up a new horizon for realizing a practical quantum computer. PMID:26687580
Digital optical processing of optical communications: towards an Optical Turing Machine
NASA Astrophysics Data System (ADS)
Touch, Joe; Cao, Yinwen; Ziyadi, Morteza; Almaiman, Ahmed; Mohajerin-Ariaei, Amirhossein; Willner, Alan E.
2017-01-01
Optical computing is needed to support Tb/s in-network processing in a way that unifies communication and computation using a single data representation that supports in-transit network packet processing, security, and big data filtering. Support for optical computation of this sort requires leveraging the native properties of optical wave mixing to enable computation and switching for programmability. As a consequence, data must be encoded digitally as phase (M-PSK), semantics-preserving regeneration is the key to high-order computation, and data processing at Tb/s rates requires mixing. Experiments have demonstrated viable approaches to phase squeezing and power restoration. This work led our team to develop the first serial, optical Internet hop-count decrement, and to design and simulate optical circuits for calculating the Internet checksum and multiplexing Internet packets. The current exploration focuses on limited-lookback computational models to reduce the need for permanent storage and hybrid nanophotonic circuits that combine phase-aligned comb sources, non-linear mixing, and switching on the same substrate to avoid the macroscopic effects that hamper benchtop prototypes.
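The Internet checksum mentioned here is the standard RFC 1071 ones'-complement sum over 16-bit words; as a point of reference for what the optical circuits compute, a minimal software version might look like the following (the example byte string is an arbitrary placeholder).

    def internet_checksum(data: bytes) -> int:
        # RFC 1071 ones'-complement checksum over 16-bit big-endian words.
        if len(data) % 2:                 # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
        return ~total & 0xFFFF

    packet = bytes(range(20))             # placeholder header-like bytes
    print(hex(internet_checksum(packet)))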
Programmable computing with a single magnetoresistive element
NASA Astrophysics Data System (ADS)
Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.
2003-10-01
The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proven concept of enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such 'chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, 'chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
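The behavior of such a run-time programmable gate, ignoring the MRAM physics entirely, can be mimicked in software as a gate whose stored configuration selects among the four truth tables; the sketch below is only a functional analogue.

    # Software analogue of a run-time programmable gate: the selected function is
    # "stored" like a non-volatile configuration and applied to the two inputs.
    TRUTH_TABLES = {
        "AND":  lambda a, b: a & b,
        "OR":   lambda a, b: a | b,
        "NAND": lambda a, b: 1 - (a & b),
        "NOR":  lambda a, b: 1 - (a | b),
    }

    class ProgrammableGate:
        def __init__(self, function="AND"):
            self.configure(function)

        def configure(self, function):          # reprogram at run-time
            self.function = function

        def evaluate(self, a, b):
            return TRUTH_TABLES[self.function](a, b)

    gate = ProgrammableGate("AND")
    print("AND(1,0) =", gate.evaluate(1, 0))
    gate.configure("NOR")                        # same element, new logic function
    print("NOR(1,0) =", gate.evaluate(1, 0))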
Will the digital computer transform classical mathematics?
Rotman, Brian
2003-08-15
Mathematics and machines have influenced each other for millennia. The advent of the digital computer introduced a powerfully new element that promises to transform the relation between them. This paper outlines the thesis that the effect of the digital computer on mathematics, already widespread, is likely to be radical and far-reaching. To articulate this claim, an abstract model of doing mathematics is introduced based on a triad of actors of which one, the 'agent', corresponds to the function performed by the computer. The model is used to frame two sorts of transformation. The first is pragmatic and involves the alterations and progressive colonization of the content and methods of enquiry of various mathematical fields brought about by digital methods. The second is conceptual and concerns a fundamental antagonism between the infinity enshrined in classical mathematics and physics (continuity, real numbers, asymptotic definitions) and the inherently real and material limit of processes associated with digital computation. An example which lies in the intersection of classical mathematics and computer science, the P=NP problem, is analysed in the light of this latter issue.
GPU-accelerated FDTD modeling of radio-frequency field-tissue interactions in high-field MRI.
Chi, Jieru; Liu, Feng; Weber, Ewald; Li, Yu; Crozier, Stuart
2011-06-01
The analysis of high-field RF field-tissue interactions requires high-performance finite-difference time-domain (FDTD) computing. Conventional CPU-based FDTD calculations offer limited computing performance in a PC environment. This study presents a graphics processing unit (GPU)-based parallel-computing framework, producing substantially boosted computing efficiency (a two-order-of-magnitude speedup) at a PC-level cost. Specific details of implementing the FDTD method on a GPU architecture are presented, and the new computational strategy has been successfully applied to the design of a novel 8-element transceive RF coil system at 9.4 T. Facilitated by the powerful GPU-FDTD computing, the new RF coil array offers optimized fields (averaging 25% improvement in sensitivity and 20% reduction in loop coupling compared with conventional array structures of the same size) for small animal imaging with a robust RF configuration. The GPU-enabled acceleration paves the way for FDTD to be applied for both detailed forward modeling and inverse design of MRI coils, which were previously impractical.
Parallel computation with molecular-motor-propelled agents in nanofabricated networks.
Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V
2016-03-08
The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
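For reference, the benchmark solved by the device, the subset sum problem for the instance {2, 5, 9}, can be enumerated exhaustively in software as below; the exponential growth of this enumeration with set size is what motivates massively parallel approaches like the one reported.

    from itertools import combinations

    def subset_sums(values):
        # Enumerate every subset and its sum (exponential in len(values)).
        sums = {}
        for r in range(len(values) + 1):
            for combo in combinations(values, r):
                sums.setdefault(sum(combo), []).append(combo)
        return sums

    instance = (2, 5, 9)                       # the instance solved in the paper
    for total, subsets in sorted(subset_sums(instance).items()):
        print(f"target {total:2d}: reachable via {subsets}")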
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
The DET/MPS programs model and simulate the Direct Energy Transfer and Multimission Spacecraft Modular Power System in order to aid both in design and in analysis of orbital energy balance. Typically, the DET power system has the solar array connected directly to the spacecraft bus, and the central building block of MPS is the Standard Power Regulator Unit. DET/MPS allows a minute-by-minute simulation of the power system's performance as it responds to various orbital parameters, focusing its output on solar array output and battery characteristics. While this package is limited in terms of orbital mechanics, it is sufficient to calculate eclipse and solar array data for circular or non-circular orbits. DET/MPS can be adjusted to run one or sequential orbits up to about one week, simulated time. These programs have been used on a variety of Goddard Space Flight Center spacecraft projects. DET/MPS is written in FORTRAN 77 with some VAX-type extensions. Any FORTRAN 77 compiler that includes VAX extensions should be able to compile and run the program with little or no modification. The compiler must at least support free-form (or tab-delineated) source format and 'do do-while end-do' control structures. DET/MPS is available for three platforms: GSC-13374, for DEC VAX series computers running VMS, is available in DEC VAX Backup format on a 9-track 1600 BPI tape (standard distribution) or TK50 tape cartridge; GSC-13443, for UNIX-based computers, is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format; and GSC-13444, for Macintosh computers running A/UX with either the NKR FORTRAN or AbSoft MacFORTRAN II compilers, is available on a 3.5 inch 800K Macintosh format diskette. Source code and test data are supplied. The UNIX version of DET requires 90K of main memory for execution. DET/MPS was developed in 1990. A/UX and Macintosh are registered trademarks of Apple Computer, Inc. VMS, DEC VAX and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories.
Computer program analyzes and monitors electrical power systems (POSIMO)
NASA Technical Reports Server (NTRS)
Jaeger, K.
1972-01-01
Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. A computer program to analyze the power system and generate a set of characteristic power system data is described. The application of status indicators to denote different exclusive conditions is presented.
Steady state whistler turbulence and stability of thermal barriers in tandem mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litwin, C.; Sudan, R.N.
The effect of the whistler turbulence on anisotropic electrons in a thermal barrier is examined. The electron distribution function is derived self-consistently by solving the steady state quasilinear diffusion equation. Saturated amplitudes are computed using the resonance broadening theory or convective stabilization. Estimated power levels necessary for sustaining the steady state of a strongly anisotropic electron population are found to exceed by orders of magnitude the estimates based on Fokker-Planck calculations for the range of parameters of tandem mirror (TMX-U and MFTF-B) experiments (Nucl. Fusion 25, 1205 (1985)). Upper limits on the allowed degree of anisotropy for existing power densities are calculated.
Bootstrap and fast wave current drive for tokamak reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehst, D.A.
1991-09-01
Using the multi-species neoclassical treatment of Hirshman and Sigmar we study steady state bootstrap equilibria with seed currents provided by low frequency (ICRF) fast waves and with additional surface current density driven by lower hybrid waves. This study applies to reactor plasmas of arbitrary aspect ratio. IN one limit the bootstrap component can supply nearly the total equilibrium current with minimal driving power (< 20 MW). However, for larger total currents considerable driving power is required (for ITER: I{sub o} = 18 MA needs P{sub FW} = 15 MW, P{sub LH} = 75 MW). A computational survey of bootstrap fractionmore » and current drive efficiency is presented. 11 refs., 8 figs.« less
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro
2017-08-01
In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of Things (IoT) applications. In energy-harvesting applications, because power supplies generated from renewable power sources cause frequent power failures, data being processed need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows energy reductions of a few orders of magnitude in comparison with a volatile processor with SRAM.
Balancing computation and communication power in power constrained clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piga, Leonardo; Paul, Indrani; Huang, Wei
Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
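A sketch of the per-node idle decision described above, with an invented wait-duration predictor, threshold, and power figures (the patent does not specify these), might look like the following.

    # Illustrative per-node decision loop for a power-constrained cluster.
    GATE_THRESHOLD_S = 2.0      # power-gate if the predicted idle wait exceeds this
    GATED_POWER_W = 5.0         # residual draw of a gated node (placeholder)
    ACTIVE_IDLE_POWER_W = 60.0  # idle-but-active draw of a node (placeholder)

    def predict_wait_s(history):
        # Trivial predictor: assume the next wait resembles the recent average.
        return sum(history) / len(history) if history else 0.0

    def idle_policy(wait_history):
        predicted = predict_wait_s(wait_history)
        if predicted > GATE_THRESHOLD_S:
            saved = ACTIVE_IDLE_POWER_W - GATED_POWER_W
            return "power-gate", saved      # saved watts go back to the cluster agent
        return "stay-active", 0.0

    print(idle_policy([0.5, 0.8, 0.6]))     # short waits -> stay active
    print(idle_policy([3.0, 4.5, 5.0]))     # long waits  -> gate and free budget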
Model based analysis of piezoelectric transformers.
Hemsel, T; Priya, S
2006-12-22
Piezoelectric transformers are increasingly popular in electrical devices owing to several advantages such as small size, high efficiency, absence of electromagnetic noise and non-flammability. In addition to conventional applications such as ballasts for backlight inverters in notebook computers, camera flashes, and fuel ignition, several new applications have emerged such as AC/DC converters, battery chargers and automobile lighting. These new applications demand high power density and a wide range of voltage gain. Currently, the transformer power density is limited to 40 W/cm³, obtained at low voltage gain. The purpose of this study was to investigate a transformer design that has the potential of providing higher power density and a wider range of voltage gain. The new transformer design utilizes the radial mode at both the input and output ports and has unidirectional polarization in the ceramics. This design was found to provide 30 W power with an efficiency of 98% and a 30 °C temperature rise above room temperature. An electro-mechanical equivalent circuit model was developed to describe the characteristics of the piezoelectric transformer. The model was found to successfully predict the characteristics of the transformer. Excellent matching was found between the computed and experimental results. The results of this study will allow unipoled piezoelectric transformers with specified performance to be designed deterministically. It is expected that in the near future the unipoled transformer will gain significant importance in various electrical components.
System-wide power management control via clock distribution network
Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.
2015-05-19
An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
Reducing power consumption while performing collective operations on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2011-10-18
Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
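A minimal sketch of the selection step described above: given a requested collective type, each node picks the algorithm variant with the lowest modeled energy cost. The variant names and per-operation energy figures are purely illustrative assumptions, not values from the patent.

```python
# Hypothetical power model: energy cost (joules/op) per collective algorithm variant.
COLLECTIVE_POWER_PROFILES = {
    "allreduce": {"ring": 4.2, "tree": 5.1, "recursive_doubling": 4.8},
}

def select_collective(op_type, profiles=COLLECTIVE_POWER_PROFILES):
    """Pick the variant of a collective with the lowest modeled energy cost."""
    variants = profiles[op_type]
    return min(variants, key=variants.get)

print(select_collective("allreduce"))  # -> "ring" under the illustrative numbers above
```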
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-09-01
ADEPT Project: Georgia Tech is creating compact, low-profile power adapters and power bricks using materials and tools adapted from other industries and from grid-scale power applications. Adapters and bricks convert electrical energy into useable power for many types of electronic devices, including laptop computers and mobile phones. These converters are often called wall warts because they are big, bulky, and sometimes cover up an adjacent wall socket that could be used to power another electronic device. The magnetic components traditionally used to make adapters and bricks have reached their limits; they can't be made any smaller without sacrificing performance. Georgia Tech is taking a cue from grid-scale power converters that use iron alloys as magnetic cores. These low-cost alloys can handle more power than other materials, but the iron must be stacked in insulated plates to maximize energy efficiency. In order to create compact, low-profile power adapters and bricks, these stacked iron plates must be extremely thin-only hundreds of nanometers in thickness, in fact. To make plates this thin, Georgia Tech is using manufacturing tools used in microelectromechanics and other small-scale industries.
1984-09-27
more effectively structured and transportable simulation program modules and powerful support software, are already in place for current use. The early ... incorporates the various limits and conditions described for the major acceleration categories. (14) Speed Loop: this module is executed when the shaft speed ... available, high-confidence models and modules. A great leverage is gained by using generally available general-purpose computers and associated support
A HISTORICAL PERSPECTIVE OF NUCLEAR THERMAL HYDRAULICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Auria, F; Rohatgi, Upendra S.
The nuclear thermal-hydraulics discipline developed to meet the needs of nuclear power plant (NPP) and, to a more limited extent, research reactor (RR) design and safety. As in all other fields where analytical methods are involved, nuclear thermal-hydraulics benefited from the development of computers. Thermodynamics, rather than fluid dynamics, lies at the basis of the development of nuclear thermal-hydraulics, together with experiments in complex two-phase situations involving complex geometry, high thermal density, and high pressure.
An ISVD-based Euclidian structure from motion for smartphones
NASA Astrophysics Data System (ADS)
Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F.
2014-06-01
The development of Mobile Mapping systems over the last decades has made it possible to quickly collect georeferenced spatial measurements by means of sensors mounted on mobile vehicles. Despite the large number of applications that can potentially take advantage of such systems, their cost currently limits their use to certain specialized organizations, companies, and universities. However, the recent worldwide diffusion of powerful mobile devices embedded with GPS, Inertial Navigation System (INS), and imaging sensors is enabling the development of small and compact mobile mapping systems. More specifically, this paper considers the development of a 3D reconstruction system based on photogrammetry methods for smartphones (or other similar mobile devices). The limited computational resources available in such systems and the users' request for real-time reconstructions impose very stringent requirements on the computational burden of the 3D reconstruction procedure. This work takes advantage of recently developed mathematical tools (incremental singular value decomposition) and of photogrammetry techniques (structure from motion, Tomasi-Kanade factorization) to achieve a very computationally efficient Euclidian 3D reconstruction of the scene. Furthermore, thanks to the presence of instrumentation for localization embedded in the device, the obtained 3D reconstruction can be properly georeferenced.
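For orientation, the following is a minimal numpy sketch of the batch Tomasi-Kanade rank-3 factorization that the abstract builds on; the incremental SVD, the metric upgrade and the georeferencing steps described in the paper are not reproduced.

```python
import numpy as np

def tomasi_kanade_factorization(W):
    """Affine structure from motion via rank-3 factorization (Tomasi-Kanade).

    W: (2F x P) measurement matrix of P points tracked over F frames.
    Returns camera matrix M (2F x 3) and structure S (3 x P), up to an
    affine ambiguity (no metric upgrade is performed in this sketch).
    """
    W = W - W.mean(axis=1, keepdims=True)        # remove per-frame translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the rank-3 subspace implied by the affine camera model.
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return M, S
```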
Molecular simulation of small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2012-11-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To address this problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they permit a time step and cell size more than an order of magnitude larger than the mean collision time and mean free path, respectively.
2-vertex Lorentzian spin foam amplitudes for dipole transitions
NASA Astrophysics Data System (ADS)
Sarno, Giorgio; Speziale, Simone; Stagno, Gabriele V.
2018-04-01
We compute transition amplitudes between two spin networks with dipole graphs, using the Lorentzian EPRL model with up to two (non-simplicial) vertices. We find power-law decreasing amplitudes in the large spin limit, decreasing faster as the complexity of the foam increases. There are no oscillations nor asymptotic Regge actions at the order considered; nonetheless, the amplitudes still induce non-trivial correlations. Spin correlations between the two dipoles appear only when one internal face is present in the foam. We compute them within a mini-superspace description, finding positive correlations, decreasing in value with the Immirzi parameter. The paper also provides an explicit guide to computing Lorentzian amplitudes using the factorisation property of SL(2,C) Clebsch-Gordan coefficients in terms of SU(2) ones. We discuss some of the difficulties of non-simplicial foams, and provide a specific criterion to partially limit the proliferation of diagrams. We systematically compare the results with the simplified EPRLs model, which is much faster to evaluate, to gather evidence on when it provides reliable approximations of the full amplitudes. Finally, we comment on implications of our results for the physics of non-simplicial spin foams and their resummation.
Power Mobility Training for Young Children with Multiple, Severe Impairments: A Case Series.
Kenyon, Lisa K; Farris, John P; Gallagher, Cailee; Hammond, Lyndsay; Webster, Lauren M; Aldrich, Naomi J
2017-02-01
Young children with neurodevelopmental conditions are often limited in their ability to explore and learn from their environment. The purposes of this case series were to (1) describe the outcomes of using an alternative power mobility device with young children who had multiple, severe impairments; (2) develop power mobility training methods for use with these children; and (3) determine the feasibility of using various outcome measures. Three children with cerebral palsy (Gross Motor Function Classification System Levels IV, V, and V) ages 17 months to 3.5 years participated in the case series. Examination included the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test (PEDI-CAT) and the Dimensions of Mastery Questionnaire (DMQ). An individualized, engaging power mobility training environment was created for each participant. Intervention was provided for 60 minutes per week over 12 weeks. All participants exhibited improvements in power mobility skills. Post-intervention PEDI-CAT scores increased in various domains for all participants. Post-intervention DMQ scores improved in Participants 1 and 2. The participants appeared to make improvements in their beginning power mobility skills. Additional research is planned to further explore the impact of power mobility training in this unique population.
Price schedules coordination for electricity pool markets
NASA Astrophysics Data System (ADS)
Legbedji, Alexis Motto
2002-04-01
We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, formally interpreted as a resource-allocation problem. Many decomposition techniques have been proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed, and interest in decomposition techniques has consequently waned. Nonetheless, there is an important class of applications for which decomposition techniques remain relevant, among others distributed systems---the Internet, perhaps, being the most conspicuous example---and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, such a system of agents can be optimized by constructing a large-scale mathematical program and solving it centrally using currently available computing power. In practice, however, because agents are self-interested and unwilling to reveal sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of the agents' objective functions subject to their constraints. An iterative price decomposition, or Lagrangian dual, method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when only weak duality holds. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedules coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program. In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.
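The following toy sketch illustrates the price-directed (Lagrangian dual) coordination idea: agents respond to a price on a shared resource, and a coordinator adjusts the price by subgradient ascent. The quadratic agent utilities and the step size are illustrative assumptions, not the thesis formulation.

```python
import numpy as np

def price_coordination(agents, resource_capacity, steps=200, lr=0.05):
    """Dual (price-based) coordination for a shared-resource allocation.

    Each agent solves its own subproblem given a price lam on the shared
    resource; the coordinator raises the price when total demand exceeds
    capacity and lowers it otherwise.
    """
    lam = 0.0
    demands = np.zeros(len(agents))
    for _ in range(steps):
        # Agent i maximizes a_i*x - 0.5*x^2 - lam*x  =>  x = max(a_i - lam, 0)
        demands = np.array([max(a - lam, 0.0) for a in agents])
        excess = demands.sum() - resource_capacity
        lam = max(lam + lr * excess, 0.0)  # subgradient step, price kept nonnegative
    return lam, demands

price, alloc = price_coordination(agents=[3.0, 2.0, 1.5], resource_capacity=3.0)
```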
NASA Astrophysics Data System (ADS)
Zhu, G.; Whitehead, D.; Perrie, W.; Allegre, O. J.; Olle, V.; Li, Q.; Tang, Y.; Dawson, K.; Jin, Y.; Edwardson, S. P.; Li, L.; Dearden, G.
2018-03-01
Spatial light modulators (SLMs) addressed with computer generated holograms (CGHs) can create structured light fields on demand when an incident laser beam is diffracted by a phase CGH. The power handling limitations of these devices, which are based on a liquid crystal layer, have always been of some concern. With careful engineering of chip thermal management, we report the detailed optical phase and temperature response of a liquid-cooled SLM exposed to picosecond laser powers up to 〈P〉 = 220 W at 1064 nm. This information is critical for determining device performance at high laser powers. SLM chip temperature rose linearly with incident laser exposure, increasing by only 5 °C at 〈P〉 = 220 W incident power, measured with a thermal imaging camera. Thermal response time with continuous exposure was 1-2 s. The optical phase response with incident power approaches 2π radians for average powers up to 〈P〉 = 130 W, hence the operational limit, while above this power, liquid crystal thickness variations limit the phase response to just over π radians. Modelling of the thermal and phase response with exposure is also presented, supporting the experimental observations well. These remarkable performance characteristics show that liquid crystal based SLM technology is highly robust when efficiently cooled. High speed, multi-beam plasmonic surface micro-structuring at a rate R = 8 cm² s⁻¹ is achieved on polished metal surfaces at 〈P〉 = 25 W exposure, while diffractive, multi-beam surface ablation with average power 〈P〉 = 100 W on stainless steel is demonstrated with an ablation rate of ~4 mm³ min⁻¹. However, above 130 W, first-order diffraction efficiency drops significantly, in accord with the observed operational limit. Continuous exposure for a period of 45 min at a laser power of 〈P〉 = 160 W did not result in any detectable drop in diffraction efficiency, confirmed afterwards by efficient parallel beam processing at 〈P〉 = 100 W. Hence, no permanent changes in SLM phase response characteristics have been detected. This research work will help to accelerate the use of liquid crystal spatial light modulators for both scientific and ultra-high-throughput laser materials micro-structuring applications.
Kinetic Inductance Memory Cell and Architecture for Superconducting Computers
NASA Astrophysics Data System (ADS)
Chen, George J.
Josephson memory devices typically use a superconducting loop containing one or more Josephson junctions to store information. The magnetic inductance of the loop in conjunction with the Josephson junctions provides multiple states to store data. This thesis shows that replacing the magnetic inductor in a memory cell with a kinetic inductor can lead to a smaller cell size. However, magnetic control of the cells is lost. Thus, a current-injection based architecture for a memory array has been designed to work around this problem. The isolation between memory cells that magnetic control provides is provided through resistors in this new architecture. However, these resistors allow leakage current to flow, which ultimately limits the size of the array due to power considerations. A kinetic inductance memory array will be limited to 4K bits with a read access time of 320 ps for a 1 µm linewidth technology. If a power decoder could be developed, the memory architecture could serve as the blueprint for a fast (<1 ns), large scale (>1 Mbit) superconducting memory array.
A Study on Group Key Agreement in Sensor Network Environments Using Two-Dimensional Arrays
Jang, Seung-Jae; Lee, Young-Gu; Lee, Kwang-Hyung; Kim, Tai-Hoon; Jun, Moon-Seog
2011-01-01
These days, with the emergence of the concept of ubiquitous computing, sensor networks that collect, analyze and process information through their sensors have attracted huge interest. However, sensor network technology is fundamentally built on wireless communication infrastructure and thus has security weaknesses and limitations such as low computing capacity, constrained power supply and cost. In this paper, considering the characteristics of the sensor network environment, we propose a group key agreement method using a keyset pre-distribution based on two-dimensional arrays that minimizes the exposure of key and personal information. Key collision problems are resolved by utilizing a polygonal shape’s center of gravity. The method shows that calculating a polygonal shape’s center of gravity requires only a very small amount of computation from the users. This simple calculation not only increases the group key generation efficiency, but also enhances security by protecting information between nodes. PMID:22164072
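The center-of-gravity computation the scheme relies on is indeed lightweight; a minimal sketch using the shoelace formula is shown below. The mapping from pre-distributed key material to polygon vertices is specific to the paper and is not reproduced here.

```python
def polygon_centroid(vertices):
    """Centroid (center of gravity) of a simple polygon via the shoelace formula.

    vertices: list of (x, y) tuples in order around the polygon.
    """
    area2 = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3.0 * area2), cy / (3.0 * area2)

print(polygon_centroid([(0, 0), (4, 0), (4, 3), (0, 3)]))  # -> (2.0, 1.5)
```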
Exploring similarities among many species distributions
Simmerman, Scott; Wang, Jingyuan; Osborne, James; Shook, Kimberly; Huang, Jian; Godsoe, William; Simons, Theodore R.
2012-01-01
Collecting species presence data and then building models to predict species distribution has been long practiced in the field of ecology for the purpose of improving our understanding of species relationships with each other and with the environment. Due to limitations of computing power as well as limited means of using modeling software on HPC facilities, past species distribution studies have been unable to fully explore diverse data sets. We build a system that can, for the first time to our knowledge, leverage HPC to support effective exploration of species similarities in distribution as well as their dependencies on common environmental conditions. Our system can also compute and reveal uncertainties in the modeling results enabling domain experts to make informed judgments about the data. Our work was motivated by and centered around data collection efforts within the Great Smoky Mountains National Park that date back to the 1940s. Our findings present new research opportunities in ecology and produce actionable field-work items for biodiversity management personnel to include in their planning of daily management activities.
Mass Storage and Retrieval at Rome Laboratory
NASA Technical Reports Server (NTRS)
Kann, Joshua L.; Canfield, Brady W.; Jamberdino, Albert A.; Clarke, Bernard J.; Daniszewski, Ed; Sunada, Gary
1996-01-01
As the speed and power of modern digital computers continue to advance, the demands on secondary mass storage systems grow. In many cases, the limitations of existing mass storage reduce the overall effectiveness of the computing system. Image storage and retrieval is one important area where improved storage technologies are required. Three-dimensional optical memories offer the advantage of large data density, on the order of 1 Tb/cm³, and faster transfer rates because of the parallel nature of optical recording. Such a system allows for the storage of multiple-Gbit-sized images, which can be recorded and accessed at reasonable rates. Rome Laboratory is currently investigating several techniques to perform three-dimensional optical storage, including holographic recording, two-photon recording, persistent spectral-hole burning, multi-wavelength DNA recording, and the use of bacteriorhodopsin as a recording material. In this paper, the current status of each of these on-going efforts is discussed. In particular, the potential payoffs as well as possible limitations are addressed.
NASA Astrophysics Data System (ADS)
Choi, Shinhyun; Tan, Scott H.; Li, Zefan; Kim, Yunjo; Choi, Chanyeol; Chen, Pai-Yu; Yeon, Hanwool; Yu, Shimeng; Kim, Jeehwan
2018-01-01
Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.
Dynamic emulation modelling for the optimal operation of water systems: an overview
NASA Astrophysics Data System (ADS)
Castelletti, A.; Galelli, S.; Giuliani, M.
2014-12-01
Despite the sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from a high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve highly resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real-world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs, through erosion/sedimentation rebalancing in the operation of run-of-river power plants, to salinity control in lakes and reservoirs.
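As a generic illustration of dynamic emulation (model reduction), the sketch below fits a low-order ARX-style emulator to input/output trajectories produced by a full process-based simulator. The DEMo framework described in the abstract involves variable selection and more elaborate structure identification; this is only the basic idea.

```python
import numpy as np

def fit_linear_dynamic_emulator(u, y, order=2):
    """Fit an ARX-style emulator y[t] ~ sum a_i*y[t-i] + sum b_i*u[t-i].

    u, y: 1-D input/output trajectories produced by the full simulation model.
    Returns the least-squares coefficients of the reduced-order emulator.
    """
    rows, targets = [], []
    for t in range(order, len(y)):
        rows.append(np.concatenate([y[t - order:t][::-1], u[t - order:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta
```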
NASA Astrophysics Data System (ADS)
Ardanuy, Antoni; Comerón, Adolfo
2018-04-01
We analyze the practical limits of a lidar system based on the use of a laser diode, random binary continuous-wave power modulation, and an avalanche photodiode (APD)-based photoreceiver, combined with the control and computing power of currently available digital signal processors (DSPs). The goal is to design a compact, portable lidar system built entirely in semiconductor technology, with low power demand and easy system configuration, allowing some of its features to be changed through software. Unlike many prior works, we emphasize the use of APDs instead of photomultiplier tubes to detect the return signal and the application of the system to measure not only hard targets, but also medium-range aerosols and clouds. We have developed an experimental prototype to evaluate the behavior of the system under different environmental conditions. Experimental results provided by the prototype are presented and discussed.
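The principle behind random binary continuous-wave modulation is that range information is recovered by correlating the photoreceiver output with the transmitted code. A minimal sketch of that correlation step is given below; the prototype's actual modulation format and DSP processing may differ.

```python
import numpy as np

def prbs_range_profile(code, received, sample_rate_hz):
    """Recover a range profile by correlating the return with the transmitted
    pseudo-random binary code, as in random-modulation CW lidar.

    code:     transmitted +/-1 binary modulation sequence
    received: digitized photoreceiver output at the same sample rate
    Returns (delays_m, correlation).
    """
    c = 3.0e8
    corr = np.correlate(received, code, mode="full")[len(code) - 1:]  # lags >= 0
    delays_m = np.arange(len(corr)) / sample_rate_hz * c / 2.0        # two-way path
    return delays_m, corr
```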
Design of a tokamak fusion reactor first wall armor against neutral beam impingement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, R.A.
1977-12-01
The maximum temperatures and thermal stresses are calculated for various first wall design proposals, using both analytical solutions and the TRUMP and SAP IV computer codes. Beam parameters, such as pulse time, cycle time, and beam power, are varied. It is found that uncooled plates should be adequate for near-term devices, while cooled protection will be necessary for fusion power reactors. Graphite and tungsten are selected for analysis because of their desirable characteristics. Graphite allows for higher heat fluxes than tungsten for similar pulse times. Anticipated erosion (due to surface effects) and plasma impurity fraction are estimated. Neutron irradiation damage is also discussed. Neutron irradiation damage (rather than erosion, fatigue, or creep) is estimated to be the lifetime-limiting factor for the component in fusion power reactors. It is found that the use of tungsten in fusion power reactors, when directly exposed to the plasma, will cause serious plasma impurity problems; graphite should not present such an impurity problem.
Interplanetary Magnetic Field Power Spectrum Variations: A VHO Enabled Study
NASA Astrophysics Data System (ADS)
Szabo, A.; Koval, A.; Merka, J.; Narock, T. W.
2010-12-01
The newly reprocessed high time resolution (11/22 vectors/s) Wind mission interplanetary magnetic field data and the solar wind key parameter search capability of the Virtual Heliospheric Observatory (VHO) afford an opportunity to study magnetic field power spectral density variations as a function of solar wind conditions. In the reprocessed Wind Magnetic Field Investigation (MFI) data, the spin tone and its harmonics are greatly reduced, which allows meaningful fitting of power spectra up to the ~2 Hz limit above which digitization noise becomes apparent. The power spectral density is computed and the spectral index is fitted separately for the MHD and ion inertial regimes, along with the break point between the two, for various solar wind conditions. The time periods of fixed solar wind conditions are obtained from VHO searches, which greatly simplifies the process. The functional dependence of the ion inertial spectral index and break point on solar wind plasma and magnetic field conditions will be discussed.
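A rough sketch of this kind of analysis, assuming a Welch periodogram and a continuous two-slope (broken) power-law fit in log-log space; the actual MFI fitting procedure, frequency ranges and initial guesses are not taken from the paper.

```python
import numpy as np
from scipy import signal, optimize

def fit_broken_power_law(b_component, fs):
    """Estimate spectral indices below/above a break frequency from a magnetic
    field component sampled at fs Hz."""
    f, psd = signal.welch(b_component, fs=fs, nperseg=4096)
    f, psd = f[1:], psd[1:]                      # drop the zero frequency
    logf, logp = np.log10(f), np.log10(psd)

    def model(lf, a1, a2, lfb, c):
        # two power laws joined continuously at the break log-frequency lfb
        return np.where(lf < lfb, c + a1 * (lf - lfb), c + a2 * (lf - lfb))

    p0 = (-5.0 / 3.0, -2.7, np.log10(0.3), np.median(logp))  # illustrative guesses
    (a1, a2, lfb, _), _ = optimize.curve_fit(model, logf, logp, p0=p0)
    return a1, a2, 10 ** lfb   # MHD index, ion-inertial index, break frequency [Hz]
```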
Vascular surgical data registries for small computers.
Kaufman, J L; Rosenberg, N
1984-08-01
Recent designs for computer-based vascular surgical registries and clinical data bases have employed large centralized systems with formal programming and mass storage. Small computers, of the types created for office use or for word processing, now contain sufficient speed and memory storage capacity to allow construction of decentralized office-based registries. Using a standardized dictionary of terms and a method of data organization adapted to word processing, we have created a new vascular surgery data registry, "VASREG." Data files are organized without programming, and a limited number of powerful logical statements in English are used for sorting. The capacity is 25,000 records with current inexpensive memory technology. VASREG is adaptable to computers made by a variety of manufacturers, and interface programs are available for converting the word-processor-formatted registry data into forms suitable for analysis by programs written in a standard programming language. This is a low-cost clinical data registry available to any physician. With a standardized dictionary, preparation of regional and national statistical summaries may be facilitated.
Extending the length and time scales of Gram-Schmidt Lyapunov vector computations
NASA Astrophysics Data System (ADS)
Costa, Anthony B.; Green, Jason R.
2013-08-01
Lyapunov vectors have attracted growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (where N is the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
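For reference, the textbook Gram-Schmidt (QR-based) procedure for the leading Lyapunov exponents is sketched below; the ScaLAPACK and MAGMA implementations discussed in the abstract parallelize exactly this kind of tangent-space propagation and reorthonormalization.

```python
import numpy as np

def lyapunov_spectrum(step, jacobian, x0, n_steps, dt, k):
    """Leading k Lyapunov exponents via tangent-space propagation with repeated
    QR (Gram-Schmidt) reorthonormalization.

    step(x)     -> state after one time step dt
    jacobian(x) -> Jacobian of that one-step map at x
    """
    x = np.asarray(x0, dtype=float)
    Q = np.eye(len(x))[:, :k]      # orthonormal tangent vectors
    sums = np.zeros(k)
    for _ in range(n_steps):
        Q = jacobian(x) @ Q        # propagate tangent vectors
        Q, R = np.linalg.qr(Q)     # Gram-Schmidt / QR step
        sums += np.log(np.abs(np.diag(R)))
        x = step(x)                # advance the trajectory
    return sums / (n_steps * dt)
```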
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations such as large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which, unlike RANS simulations, exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, ``non-intrusive'' LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
Analysis of the return to power scenario following a LBLOCA in a PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macian, R.; Tyler, T.N.; Mahaffy, J.H.
1995-09-01
The risk of reactivity accidents has been considered an important safety issue since the beginning of the nuclear power industry. In particular, several events leading to such scenarios for PWRs have been recognized and studied to assess the potential risk of fuel damage. The present paper analyzes one such event: the possible return to power during the reflooding phase following a LBLOCA. TRAC-PF1/MOD2 coupled with a three-dimensional neutronic model of the core based on the Nodal Expansion Method (NEM) was used to perform the analysis. The system computer model contains a detailed representation of a complete typical 4-loop PWR. Thus, the simulation can follow complex system interactions during reflooding, which may influence the neutronics feedback in the core. Analyses were made with core models based on cross sections generated by LEOPARD. A standard case and a potentially more limiting case, with increased pressurizer and accumulator inventories, were run. In both simulations, the reactor reaches a stable state after the reflooding is completed. The lower core region, filled with cold water, generates enough power to boil part of the incoming liquid, thus preventing the core average liquid fraction from reaching a value high enough to cause a return to power. At the same time, the mass flow rate through the core is adequate to maintain the rod temperature well below the fuel damage limit.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen
1999-01-01
Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods range from Blade Element Momentum Theory (BEM) and Vortex Lattice (VL) methods to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies of inboard stall and stall delay models. The RaNS methodologies show promise in predicting blade stall; however, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive even for the best-endowed research labs. Although numerical power predictions have been compared to experiment, good wind turbine data sufficient for code validation remain scarce. This paper uses experimental data extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that have been further reduced to steady wind and zero yaw conditions suitable for comparison with "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.
Valtonen, Anu; Pöyhönen, Tapani; Sipilä, Sarianna; Heinonen, Ari
2010-06-01
To study the effects of aquatic resistance training on mobility, muscle power, and cross-sectional area. Randomized controlled trial. Research laboratory and hospital rehabilitation pool. Population-based sample (N=50) of eligible women and men 55 to 75 years old 4 to 18 months after unilateral knee replacement with no contraindications who were willing to participate in the trial. Twelve-week progressive aquatic resistance training (n=26) or no intervention (n=24). Mobility limitation assessed by walking speed and stair ascending time, and self-reported physical functional difficulty, pain, and stiffness assessed by Western Ontario and McMaster University Osteoarthritis Index (WOMAC) questionnaire. Knee extensor power and knee flexor power assessed isokinetically, and thigh muscle cross-sectional area (CSA) by computed tomography. Compared with the change in the control group, habitual walking speed increased by 9% (P=.005) and stair ascending time decreased by 15% (P=.006) in the aquatic training group. There was no significant difference between the groups in the WOMAC scores. The training increased knee extensor power by 32% (P<.001) in the operated and 10% (P=.001) in the nonoperated leg, and knee flexor power by 48% (P=.003) in the operated and 8% (P=.002) in the nonoperated leg compared with controls. The mean increase in thigh muscle CSA of the operated leg was 3% (P=.018) and that of the nonoperated leg 2% (P=.019) after training compared with controls. Progressive aquatic resistance training had favorable effects on mobility limitation by increasing walking speed and decreasing stair ascending time. In addition, training increased lower limb muscle power and muscle CSA. Resistance training in water is a feasible mode of rehabilitation that has wide-ranging positive effects on patients after knee replacement surgery. Copyright 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets
2010-01-01
Background: Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results: We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions: The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262
Active Flash: Out-of-core Data Analytics on Flash Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-01-01
Feature rankings are often used for supervised dimension reduction especially when discriminating power of each feature is of interest, dimensionality of dataset is extremely high, or computational power is limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature ranking method. We study whether the classifiers influence the SVC rankings or the discriminative power of features themselves has a dominant impact on the final rankings. We show the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study if heterogeneous classifiers ensemble approaches provide more unbiased rankings and if they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification from the optimal choices. PMID:25177107
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection.
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-11-01
Feature rankings are often used for supervised dimension reduction especially when discriminating power of each feature is of interest, dimensionality of dataset is extremely high, or computational power is limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature ranking method. We study whether the classifiers influence the SVC rankings or the discriminative power of features themselves has a dominant impact on the final rankings. We show the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study if heterogeneous classifiers ensemble approaches provide more unbiased rankings and if they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification from the optimal choices.
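A minimal sketch of SVC ranking as described above: each feature is scored by the cross-validated performance of a classifier trained on that feature alone. The choice of logistic regression and of five-fold cross-validation here is an illustrative assumption; the paper's point is precisely that this choice can bias the ranking.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def svc_feature_ranking(X, y, classifier=None, cv=5):
    """Single Variable Classifier (SVC) ranking: score each feature by the
    cross-validated accuracy of a classifier trained on that feature alone."""
    clf = classifier or LogisticRegression(max_iter=1000)
    scores = np.array([
        cross_val_score(clf, X[:, [j]], y, cv=cv).mean()
        for j in range(X.shape[1])
    ])
    return np.argsort(scores)[::-1], scores  # best features first, per-feature scores
```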
Alpha Control - A new Concept in SPM Control
NASA Astrophysics Data System (ADS)
Spizig, P.; Sanchen, D.; Volswinkler, G.; Ibach, W.; Koenen, J.
2006-03-01
Controlling modern Scanning Probe Microscopes demands highly sophisticated electronics. While flexibility and powerful computing capability are of great importance in facilitating the variety of measurement modes, extremely low noise is also a necessity. Accordingly, modern SPM controller designs are based on digital electronics to overcome the drawbacks of analog designs. While today's SPM controllers are based on DSPs or microprocessors and often still incorporate analog parts, we are now introducing a completely new approach: using a Field Programmable Gate Array (FPGA) to implement the digital control tasks allows unrivalled data processing speed by computing all tasks in parallel within a single chip. Time-consuming task switching between data acquisition, digital filtering, scanning and the computing of feedback signals can be completely avoided. Together with a star topology that avoids any bus limitations in accessing the variety of ADCs and DACs, this design guarantees for the first time an entirely deterministic timing capability in the nanosecond regime for all tasks. This becomes especially useful for any external experiments which must be synchronized with the scan, or for high-speed scans that require not only closed-loop control of the scanner, but also dynamic correction of the scan movement. Delicate samples additionally benefit from extremely high sample rates, allowing highly resolved signals and low noise levels.
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2001-04-01
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations. Joseph W. Rudmin (Physics Dept, James Madison University). A new method for solving systems of differential equations will be presented, which has been developed by J. Edgar Parker and James Sochacki of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. The method yields high-degree solutions: 20th degree is easily obtainable. It is conceptually simple, fast, and extremely general. It has been applied to over a hundred systems of differential equations, some of which were previously unsolved, and has yet to fail to solve any system for which the Maclaurin series converges. The method is non-recursive: each coefficient in the series is calculated just once, in closed form, and its accuracy is limited only by the digital accuracy of the computer. Although the original differential equations may include any mathematical functions, the computational method uses ONLY the operations of addition, subtraction, and multiplication. Furthermore, it is perfectly suited to parallel-processing computer languages. Those who learn this method will never use Runge-Kutta or predictor-corrector methods again. Examples will be presented, including the classical many-body problem.
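The essence of the method, computing Maclaurin coefficients one order at a time using only addition, subtraction and multiplication (via Cauchy products), can be illustrated on the toy problem y' = y², y(0) = y0, whose exact solution is y0/(1 - y0 t). This sketch is only an illustration of the idea, not the general polynomial-recasting machinery of the method.

```python
def parker_sochacki_y_squared(y0, degree):
    """Maclaurin coefficients for y' = y^2, y(0) = y0, built one order at a
    time using a Cauchy product (each coefficient computed once, with only
    +, -, * operations)."""
    a = [float(y0)]
    for k in range(degree):
        cauchy = sum(a[i] * a[k - i] for i in range(k + 1))   # coeff of t^k in y^2
        a.append(cauchy / (k + 1))
    return a

def eval_series(a, t):
    return sum(c * t**n for n, c in enumerate(a))

coeffs = parker_sochacki_y_squared(1.0, 20)
print(eval_series(coeffs, 0.5), 1 / (1 - 0.5))   # truncated series vs exact 1/(1-t)
```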
GPS synchronized power system phase angle measurements
NASA Astrophysics Data System (ADS)
Wilson, Robert E.; Sterlina, Patrick S.
1994-09-01
This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by the artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high voltage components. This work will compare computer simulations of the same event with field measurements.
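A simplified illustration of what synchronized phasor measurement makes possible: with GPS-time-aligned waveform samples from two buses, the relative phase angle follows from projecting each waveform onto a common reference phasor. Real PMUs use standardized estimation filters (e.g. per IEEE C37.118); the fragment below is only a conceptual sketch with assumed sample arrays.

```python
import numpy as np

def relative_phase_angle(bus_a_samples, bus_b_samples, fs, f0=60.0):
    """Phase-angle difference (degrees) between two bus voltages from
    GPS-time-aligned waveform samples at sample rate fs, projected onto a
    reference phasor at the nominal frequency f0."""
    t = np.arange(len(bus_a_samples)) / fs
    ref = np.exp(-1j * 2 * np.pi * f0 * t)
    phasor_a = np.mean(bus_a_samples * ref)
    phasor_b = np.mean(bus_b_samples * ref)
    return np.degrees(np.angle(phasor_a) - np.angle(phasor_b))
```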
Avalanche statistics from data with low time resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
Coherence-generating power of quantum dephasing processes
NASA Astrophysics Data System (ADS)
Styliaris, Georgios; Campos Venuti, Lorenzo; Zanardi, Paolo
2018-03-01
We provide a quantification of the capability of various quantum dephasing processes to generate coherence out of incoherent states. The measures defined, admitting computable expressions for any finite Hilbert-space dimension, are based on probabilistic averages and arise naturally from the viewpoint of coherence as a resource. We investigate how the capability of a dephasing process (e.g., a nonselective orthogonal measurement) to generate coherence depends on the relevant bases of the Hilbert space over which coherence is quantified and the dephasing process occurs, respectively. We extend our analysis to include those Lindblad time evolutions which, in the infinite-time limit, dephase the system under consideration and calculate their coherence-generating power as a function of time. We further identify specific families of such time evolutions that, although dephasing, have optimal (over all quantum processes) coherence-generating power for some intermediate time. Finally, we investigate the coherence-generating capability of random dephasing channels.
Quantitative Evaluation Method of Each Generation Margin for Power System Planning
NASA Astrophysics Data System (ADS)
Su, Su; Tanaka, Kazuyuki
As power system deregulation advances, competition among power companies intensifies, and they seek more efficient system planning using existing facilities. Therefore, an efficient system planning method has been expected. This paper proposes a quantitative evaluation method for the (N-1) generation margin considering overload and voltage stability restrictions. Concerning the generation margin related to overload, a fast solution method that avoids recalculation of the (N-1) Y-matrix is proposed. Regarding voltage stability, this paper proposes an efficient method to search for the stability limit. The IEEE30 model system, which is composed of 6 generators and 14 load nodes, is employed to validate the proposed method. According to the results, the proposed method can reduce the computational cost for the generation margin related to overload under the (N-1) condition, and specify the margin value quantitatively.
Avalanche statistics from data with low time resolution
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.; ...
2016-11-22
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
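To make the setting concrete, the sketch below extracts avalanche sizes from a rate signal by thresholding; downsampling such a signal (e.g. keeping every tenth sample) merges and truncates avalanches, which is the distortion the paper quantifies. The corrected estimators proposed in the paper are not reproduced here.

```python
import numpy as np

def avalanche_sizes(signal_rate, threshold=0.0):
    """Extract avalanche sizes from a rate (e.g. stress-drop-rate) series:
    an avalanche is a contiguous excursion above threshold, and its size is
    the area under that excursion."""
    sizes, current = [], 0.0
    for v in signal_rate:
        if v > threshold:
            current += v
        elif current > 0.0:
            sizes.append(current)
            current = 0.0
    if current > 0.0:
        sizes.append(current)
    return np.array(sizes)
```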
Computationally guided discovery of thermoelectric materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorai, Prashun; Stevanović, Vladan; Toberer, Eric S.
The potential for advances in thermoelectric materials, and thus solid-state refrigeration and power generation, is immense. Progress so far has been limited by both the breadth and diversity of the chemical space and the serial nature of experimental work. In this Review, we discuss how recent computational advances are revolutionizing our ability to predict electron and phonon transport and scattering, as well as materials dopability, and we examine efficient approaches to calculating critical transport properties across large chemical spaces. When coupled with experimental feedback, these high-throughput approaches can stimulate the discovery of new classes of thermoelectric materials. Within smaller materials subsets, computations can guide the optimal chemical and structural tailoring to enhance materials performance and provide insight into the underlying transport physics. Beyond perfect materials, computations can be used for the rational design of structural and chemical modifications (such as defects, interfaces, dopants and alloys) to provide additional control on transport properties to optimize performance. Through computational predictions for both materials searches and design, a new paradigm in thermoelectric materials discovery is emerging.
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
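A generic simulated-annealing loop of the kind alluded to above is sketched below: worse candidate paths are accepted with probability exp(-delta/T) so the search can escape local minima of the cost measure. The cost and perturbation functions (collision penalty, path length, joint limits) are problem-specific assumptions of this sketch, not NASA's implementation.

```python
import math
import random

def anneal_path(waypoints, cost, perturb, t0=1.0, t_min=1e-3, alpha=0.995):
    """Simulated-annealing search over robot paths (lists of joint-angle or
    waypoint vectors). 'cost' scores a path; 'perturb' returns a randomly
    modified copy of a path."""
    best = current = list(waypoints)
    t = t0
    while t > t_min:
        candidate = perturb(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always, and worse paths with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
        t *= alpha   # geometric cooling schedule
    return best
```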
Computationally guided discovery of thermoelectric materials
Gorai, Prashun; Stevanović, Vladan; Toberer, Eric S.
2017-08-22
The potential for advances in thermoelectric materials, and thus solid-state refrigeration and power generation, is immense. Progress so far has been limited by both the breadth and diversity of the chemical space and the serial nature of experimental work. In this Review, we discuss how recent computational advances are revolutionizing our ability to predict electron and phonon transport and scattering, as well as materials dopability, and we examine efficient approaches to calculating critical transport properties across large chemical spaces. When coupled with experimental feedback, these high-throughput approaches can stimulate the discovery of new classes of thermoelectric materials. Within smaller materials subsets, computations can guide the optimal chemical and structural tailoring to enhance materials performance and provide insight into the underlying transport physics. Beyond perfect materials, computations can be used for the rational design of structural and chemical modifications (such as defects, interfaces, dopants and alloys) to provide additional control on transport properties to optimize performance. Through computational predictions for both materials searches and design, a new paradigm in thermoelectric materials discovery is emerging.
Graphics processing units in bioinformatics, computational biology and systems biology.
Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela
2017-09-01
Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining an increasing attention by the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.
Quantum phases with differing computational power.
Cui, Jian; Gu, Mile; Kwek, Leong Chuan; Santos, Marcelo França; Fan, Heng; Vedral, Vlatko
2012-05-01
The observation that concepts from quantum information have generated many alternative indicators of quantum phase transitions hints that quantum phase transitions possess operational significance with respect to the processing of quantum information. Yet, studies on whether such transitions lead to quantum phases that differ in their capacity to process information remain limited. Here we show that there exist quantum phase transitions that cause a distinct qualitative change in our ability to simulate certain quantum systems under perturbation of an external field by local operations and classical communication. In particular, we show that in certain quantum phases of the XY model, adiabatic perturbations of the external magnetic field can be simulated by local spin operations, whereas within other phases the same perturbation results in coherent non-local interactions. We discuss the potential implications for adiabatic quantum computation, where a computational advantage exists only when adiabatic perturbation results in coherent multi-body interactions.
Computational predictions of energy materials using density functional theory
NASA Astrophysics Data System (ADS)
Jain, Anubhav; Shin, Yongwoo; Persson, Kristin A.
2016-01-01
In the search for new functional materials, quantum mechanics is an exciting starting point. The fundamental laws that govern the behaviour of electrons have the possibility, at the other end of the scale, to predict the performance of a material for a targeted application. In some cases, this is achievable using density functional theory (DFT). In this Review, we highlight DFT studies predicting energy-related materials that were subsequently confirmed experimentally. The attributes and limitations of DFT for the computational design of materials for lithium-ion batteries, hydrogen production and storage materials, superconductors, photovoltaics and thermoelectric materials are discussed. In the future, we expect that the accuracy of DFT-based methods will continue to improve and that growth in computing power will enable millions of materials to be virtually screened for specific applications. Thus, these examples represent a first glimpse of what may become a routine and integral step in materials discovery.
Combining dynamical decoupling with fault-tolerant quantum computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, Hui Khoon; Preskill, John; Lidar, Daniel A.
2011-07-15
We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise and have a lower overhead cost than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.
Restricted Authentication and Encryption for Cyber-physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkpatrick, Michael S; Bertino, Elisa; Sheldon, Frederick T
2009-01-01
Cyber-physical systems (CPS) are characterized by the close linkage of computational resources and physical devices. These systems can be deployed in a number of critical infrastructure settings. As a result, the security requirements of CPS are different from those of traditional computing architectures. For example, critical functions must be identified and isolated from interference by other functions. Similarly, lightweight schemes may be required, as CPS can include devices with limited computing power. One approach that offers promise for CPS security is the use of lightweight, hardware-based authentication. Specifically, we consider the use of Physically Unclonable Functions (PUFs) to bind an access request to specific hardware with device-specific keys. PUFs are implemented in hardware, such as SRAM, and can be used to uniquely identify the device. This technology could be used in CPS to ensure location-based access control and encryption, both of which would be desirable for CPS implementations.
Parallelization of the FLAPW method
NASA Astrophysics Data System (ADS)
Canning, A.; Mannstadt, W.; Freeman, A. J.
2000-08-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
A spatially localized architecture for fast and modular DNA computing
NASA Astrophysics Data System (ADS)
Chatterjee, Gourab; Dalchau, Neil; Muscat, Richard A.; Phillips, Andrew; Seelig, Georg
2017-09-01
Cells use spatial constraints to control and accelerate the flow of information in enzyme cascades and signalling networks. Synthetic silicon-based circuitry similarly relies on spatial constraints to process information. Here, we show that spatial organization can be a similarly powerful design principle for overcoming limitations of speed and modularity in engineered molecular circuits. We create logic gates and signal transmission lines by spatially arranging reactive DNA hairpins on a DNA origami. Signal propagation is demonstrated across transmission lines of different lengths and orientations and logic gates are modularly combined into circuits that establish the universality of our approach. Because reactions preferentially occur between neighbours, identical DNA hairpins can be reused across circuits. Co-localization of circuit elements decreases computation time from hours to minutes compared to circuits with diffusible components. Detailed computational models enable predictive circuit design. We anticipate our approach will motivate using spatial constraints for future molecular control circuit designs.
Personal Computer-less (PC-less) Microcontroller Training Kit
NASA Astrophysics Data System (ADS)
Somantri, Y.; Wahyudin, D.; Fushilat, I.
2018-02-01
A microcontroller training kit is necessary for the practical work of electrical engineering education students. However, available training kits are not only costly but also fail to meet laboratory requirements. An affordable and portable microcontroller kit could answer this problem. This paper explains the design and development of a Personal Computer-less (PC-less) Microcontroller Training Kit. It was developed based on a Lattepanda processor with an Arduino microcontroller as the target. The training kit is equipped with advanced input-output interfaces and adopts a low-cost, low-power design. Preliminary usability testing showed that this device can be used as a tool for microcontroller programming and industrial automation training. Because it is portable, the device can be operated in rural areas where electricity and computer infrastructure are limited. Furthermore, the training kit is suitable for electrical engineering students from universities and vocational high schools.
Comparison of SOM point densities based on different criteria.
Kohonen, T
1999-11-15
Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
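A minimal sketch of the basic one-dimensional SOM update rule discussed above; the grid size, learning-rate schedule and input density are assumed for illustration. The spacing of the converged model vectors reflects the point density that the article compares against the optimum of the distortion measure.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 20                      # 1-D SOM with 20 model (codebook) vectors
m = np.sort(rng.random(grid))  # initial model vectors on [0, 1]

def som_step(m, x, lr, radius):
    """One step of the basic 1-D SOM algorithm (illustrative sketch)."""
    c = np.argmin(np.abs(m - x))            # best-matching unit
    dist = np.abs(np.arange(len(m)) - c)    # grid distance to the winner
    h = np.exp(-(dist / radius) ** 2)       # neighbourhood function
    return m + lr * h * (x - m)             # move neighbours towards the input

for t in range(20000):
    x = rng.random() ** 2          # inputs drawn from a non-uniform density p(x)
    lr = 0.05 * (1 - t / 20000)
    radius = max(0.5, 3 * (1 - t / 20000))
    m = som_step(m, x, lr, radius)

# The spacing between converged model vectors is the SOM point density,
# which in general differs from the density minimising the distortion measure.
print(np.diff(m))
```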
The Design and Analysis of Transposon-Insertion Sequencing Experiments
Chao, Michael C.; Abel, Sören; Davis, Brigid M.; Waldor, Matthew K.
2016-01-01
Transposon-insertion sequencing (TIS) is a powerful approach that can be widely applied to genome-wide definition of loci that are required for growth in diverse conditions. However, experimental design choices and stochastic biological processes can heavily influence the results of TIS experiments and affect downstream statistical analysis. Here, we discuss TIS experimental parameters and how these factors relate to the benefits and limitations of the various statistical frameworks that can be applied to computational analysis of TIS data. PMID:26775926
Study of an engine flow diverter system for a large scale ejector powered aircraft model
NASA Technical Reports Server (NTRS)
Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.
1981-01-01
Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.
NASA Technical Reports Server (NTRS)
Tinling, B. E.
1977-01-01
Estimates of the effectiveness of a model following type control system in reducing the roll excursion due to a wake vortex encounter were obtained from single degree of freedom computations with inputs derived from the results of wind tunnel, flight, and simulation experiments. The analysis indicates that the control power commanded by the automatic system must be roughly equal to the vortex induced roll acceleration if effective limiting of the maximum bank angle is to be achieved.
Characterization of photochromic computer-generated holograms for optical testing
NASA Astrophysics Data System (ADS)
Pariani, Giorgio; Bertarelli, Chiara; Bianco, Andrea; Schaal, Frederik; Pruss, Christof
2012-09-01
We investigate the possibility of producing photochromic CGHs with maskless lithography methods. For this purpose, optical properties and requirements of photochromic materials will be shown. A diarylethene-based polyurethane is developed and characterized. The resolution limit and the influence of the writing parameters on the produced patterns, namely speed rate and light power, have been determined. After the optimization of the writing process, gratings and Fresnel Zone Plates are produced on the photochromic layer and diffraction efficiencies are measured. Improvements and perspectives will be discussed.
LWT Based Sensor Node Signal Processing in Vehicle Surveillance Distributed Sensor Network
NASA Astrophysics Data System (ADS)
Cha, Daehyun; Hwang, Chansik
Previous vehicle surveillance research on distributed sensor networks focused on overcoming power limitations and communication bandwidth constraints in the sensor node. In spite of these constraints, a vehicle surveillance sensor node must perform signal compression, feature extraction, target localization, noise cancellation and collaborative signal processing with low computation and communication energy dissipation. In this paper, we introduce an algorithm for lightweight wireless sensor node signal processing based on lifting-scheme wavelet analysis feature extraction in a distributed sensor network.
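The abstract names lifting-scheme wavelet analysis as the feature extractor; the sketch below shows one generic Haar-like lifting step (split, predict, update) and an energy-per-level feature vector. It is illustrative only, with an assumed toy signal rather than the paper's algorithm, but it shows why lifting is attractive on a low-power node: each step needs only a handful of additions per sample.

```python
import math

def lifting_step(signal):
    """One Haar-like lifting step: split, predict, update (illustrative sketch)."""
    even = signal[0::2]
    odd = signal[1::2]
    # Predict: odd samples are predicted from even neighbours; details = prediction error.
    detail = [o - e for o, e in zip(odd, even)]
    # Update: adjust the even samples so the approximation preserves the local mean.
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

def features(signal, levels=3):
    """Cheap multi-level features: energy of the detail coefficients per level."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = lifting_step(approx)
        feats.append(sum(d * d for d in detail))
    return feats

# Toy vehicle-signature-like signal (assumed).
sig = [math.sin(0.3 * n) + 0.1 * math.sin(2.5 * n) for n in range(64)]
print(features(sig))
```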
3-d finite element model development for biomechanics: a software demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollerbach, K.; Hollister, A.M.; Ashby, E.
1997-03-01
Finite element analysis is becoming an increasingly important part of biomechanics and orthopedic research, as computational resources become more powerful, and data handling algorithms become more sophisticated. Until recently, tools with sufficient power did not exist or were not accessible to adequately model complicated, three-dimensional, nonlinear biomechanical systems. In the past, finite element analyses in biomechanics have often been limited to two-dimensional approaches, linear analyses, or simulations of single tissue types. Today, we have the resources to model fully three-dimensional, nonlinear, multi-tissue, and even multi-joint systems. The authors will present the process of developing these kinds of finite element models, using human hand and knee examples, and will demonstrate their software tools.
Assessment of MCRM Boost Assist from Orbit for Deep Space Missions
NASA Technical Reports Server (NTRS)
2000-01-01
This report provides results of an analysis of the beamed-energy-driven MHD Chemical Rocket Motor (MCRM) applied to boost from orbit to escape for deep space and interplanetary missions. Parametric mission analyses were performed to determine the operating regime for which the MCRM provides significant propulsion performance enhancement. Analysis of the MHD accelerator was performed using numerical computational methods to determine the design and operational features necessary to achieve an Isp on the order of 2,000 to 3,000 seconds. Algorithms were developed to scale weights for the accelerator and power supply. Significant improvement in propulsion system performance can be achieved with the beamed-energy-driven MCRM. The limiting factor on achievable vehicle acceleration is the specific power of the rectenna.
Compact time- and space-integrating SAR processor: design and development status
NASA Astrophysics Data System (ADS)
Haney, Michael W.; Levy, James J.; Christensen, Marc P.; Michael, Robert R., Jr.; Mock, Michael M.
1994-06-01
Progress toward a flight demonstration of the acousto-optic time- and space- integrating real-time SAR image formation processor program is reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results reported include tests of a laboratory version of the concept, a description of the compact optical design that will be implemented, and an overview of the electronic interface and controller modules of the flight-test system.
Hamilton Standard Q-fan demonstrator dynamic pitch change test program, volume 1
NASA Technical Reports Server (NTRS)
Demers, W. J.; Nelson, D. J.; Wainauski, H. S.
1975-01-01
Tests of a full scale variable pitch fan engine to obtain data on the structural characteristics, response times, and fan/core engine compatibility during transient changes in blade angle, fan rpm, and engine power are reported. Steady state reverse thrust tests with a take-off nozzle configuration were also conducted. The 1.4 meter diameter, 13 bladed controllable pitch fan was driven by a T55 L 11A engine with power and blade angle coordinated by a digital computer. The tests demonstrated an ability to change from full forward thrust to reverse thrust in less than one (1) second. Reverse thrust was effected through feather and through flat pitch; structural characteristics and engine/fan compatibility were within satisfactory limits.
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
Efficient matrix approach to optical wave propagation and Linear Canonical Transforms.
Shakir, Sami A; Fried, David L; Pease, Edwin A; Brennan, Terry J; Dolash, Thomas M
2015-10-05
The Fresnel diffraction integral form of optical wave propagation and the more general Linear Canonical Transforms (LCT) are cast into a matrix transformation form. Taking advantage of recent efficient matrix multiply algorithms, this approach promises an efficient computational and analytical tool that is competitive with FFT-based methods but offers better behavior in terms of aliasing and transparent boundary conditions, and allows the numbers of sampling points and the computational window sizes of the input and output planes to be chosen independently. This flexibility makes the method significantly faster than FFT-based propagators when only a single point (as in Strehl metrics) or a limited number of points (as in power-in-the-bucket metrics) is needed in the output observation plane.
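A minimal one-dimensional sketch of the matrix view of Fresnel propagation described above, using a simplified discretized quadratic-phase kernel that is not the authors' exact formulation. Because each output sample corresponds to one row of the matrix, evaluating a single on-axis point costs only one inner product, which is the advantage over FFT propagators highlighted in the abstract.

```python
import numpy as np

def fresnel_matrix(x_out, x_in, wavelength, z):
    """Discretised 1-D Fresnel kernel as a matrix (illustrative sketch).

    The output and input samplings may differ in number and spacing, which is
    the flexibility the abstract highlights over FFT-based propagators.
    """
    dxi = x_in[1] - x_in[0]
    k = np.pi / (wavelength * z)
    # Each row maps the whole input field to one output sample.
    T = np.exp(1j * k * (x_out[:, None] - x_in[None, :]) ** 2)
    return T * dxi * np.sqrt(1 / (1j * wavelength * z))

wavelength, z = 1e-6, 1.0                       # assumed wavelength [m] and distance [m]
x_in = np.linspace(-5e-3, 5e-3, 2048)
field_in = np.exp(-(x_in / 1e-3) ** 2)          # Gaussian input beam (assumed)

# Only one on-axis output point is needed (e.g., a Strehl-like metric):
x_out = np.array([0.0])
on_axis = fresnel_matrix(x_out, x_in, wavelength, z) @ field_in
print(abs(on_axis[0]) ** 2)
```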
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the distribution of watershed hydrological processes in time and space, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a very large amount of calculation, especially for large rivers, and therefore needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize in the space and time dimensions: they compute the natural features of the distributed hydrological model in order, grid by grid (unit by unit, basin by basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the runoff characteristics in time and space of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, meaning that it can make full use of the available computing and storage resources under the condition of limited computing resources, and its computing efficiency improves approximately linearly as computing resources are added. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.
Adaptations in Electronic Structure Calculations in Heterogeneous Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamudupula, Sai
Modern quantum chemistry deals with electronic structure calculations of unprecedented complexity and accuracy. They demand the full power of high-performance computing and must be in tune with the given architecture for superior efficiency. To make such applications resource-aware, it is desirable to enable their static and dynamic adaptations using some external software (middleware), which may monitor both system availability and application needs, rather than mix science with system-related calls inside the application. The present work investigates scientific application interlinking with middleware based on the example of the computational chemistry package GAMESS and the middleware NICAN. The existing synchronous model is limited by the possible delays due to the middleware processing time under the sustainable runtime system conditions. Proposed asynchronous and hybrid models aim at overcoming this limitation. When linked with NICAN, the fragment molecular orbital (FMO) method is capable of adapting statically and dynamically its fragment scheduling policy based on the computing platform conditions. Significant execution time and throughput gains have been obtained due to such static adaptations when the compute nodes have very different core counts. Dynamic adaptations are based on the main memory availability at run time. NICAN prompts FMO to postpone scheduling certain fragments, if there is not enough memory for their immediate execution. Hence, FMO may be able to complete the calculations whereas without such adaptations it aborts.
Open solutions to distributed control in ground tracking stations
NASA Technical Reports Server (NTRS)
Heuser, William Randy
1994-01-01
The advent of high speed local area networks has made it possible to interconnect small, powerful computers to function together as a single large computer. Today, distributed computer systems are the new paradigm for large scale computing systems. However, the communications provided by the local area network is only one part of the solution. The services and protocols used by the application programs to communicate across the network are as indispensable as the local area network. And the selection of services and protocols that do not match the system requirements will limit the capabilities, performance, and expansion of the system. Proprietary solutions are available but are usually limited to a select set of equipment. However, there are two solutions based on 'open' standards. The question that must be answered is 'which one is the best one for my job?' This paper examines a model for tracking stations and their requirements for interprocessor communications in the next century. The model and requirements are matched with the model and services provided by the five different software architectures and supporting protocol solutions. Several key services are examined in detail to determine which services and protocols most closely match the requirements for the tracking station environment. The study reveals that the protocols are tailored to the problem domains for which they were originally designed. Further, the study reveals that the process control model is the closest match to the tracking station model.
Diamond, Alan; Nowotny, Thomas; Schmuker, Michael
2016-01-01
Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Computational Power of Symmetry-Protected Topological Phases.
Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert
2017-07-07
We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.
Computational Power of Symmetry-Protected Topological Phases
NASA Astrophysics Data System (ADS)
Stephen, David T.; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert
2017-07-01
We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
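The near-N speedup reported above comes from splitting a large query set across nodes that run BLAST independently; the sketch below shows a generic round-robin split of FASTA records into per-node chunks. The file names and record contents are hypothetical, and this is not Squid's code.

```python
def split_fasta(records, n_nodes):
    """Round-robin split of FASTA records into n_nodes chunks (generic sketch)."""
    chunks = [[] for _ in range(n_nodes)]
    for i, rec in enumerate(records):
        chunks[i % n_nodes].append(rec)
    return chunks

# Hypothetical query set: (header, sequence) pairs.
records = [(">seq%d" % i, "ACGT" * 50) for i in range(1000)]

for node, chunk in enumerate(split_fasta(records, 8)):
    # Hypothetical per-node query files; each node runs its chunk independently,
    # so wall-clock time is roughly the single-machine time divided by the node count.
    with open("query_node%02d.fasta" % node, "w") as fh:
        for header, seq in chunk:
            fh.write(header + "\n" + seq + "\n")
```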
Spiking Neural P Systems With Rules on Synapses Working in Maximum Spiking Strategy.
Tao Song; Linqiang Pan
2015-06-01
Spiking neural P systems (called SN P systems for short) are a class of parallel and distributed neural-like computation models inspired by the way the neurons process information and communicate with each other by means of impulses or spikes. In this work, we introduce a new variant of SN P systems, called SN P systems with rules on synapses working in maximum spiking strategy, and investigate the computation power of the systems as both number and vector generators. Specifically, we prove that i) if no limit is imposed on the number of spikes in any neuron during any computation, such systems can generate the sets of Turing computable natural numbers and the sets of vectors of positive integers computed by a k-output register machine; ii) if an upper bound is imposed on the number of spikes in each neuron during any computation, such systems can characterize semi-linear sets of natural numbers as number generating devices; as vector generating devices, such systems can only characterize the family of sets of vectors computed by a sequential monotonic counter machine, which is strictly included in the family of semi-linear sets of vectors. This gives a positive answer to the problem formulated in Song et al., Theor. Comput. Sci., vol. 529, pp. 82-95, 2014.
Emulating a million machines to investigate botnets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudish, Donald W.
2010-06-01
Researchers at Sandia National Laboratories in Livermore, California are creating what is in effect a vast digital petri dish able to hold one million operating systems at once in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands or in some cases even millions of computers, making them among the world's most powerful computers for some applications.
Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2016-01-01
Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases, and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Δx+ = 45, Δy+ = 2 and Δz+ = 17. Various subgrid-scale (SGS) models have been used and, except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent them.
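The resolution figures above are given in wall units; as a reminder of what that means, the sketch below converts physical grid spacings to Δ+ values via Δ+ = Δ·u_τ/ν. The friction velocity, viscosity and spacings are assumed, representative numbers, not values from the report.

```python
# Convert physical grid spacings to wall units: delta_plus = delta * u_tau / nu.
# The friction velocity and viscosity below are assumed, representative values,
# not numbers taken from the report.
u_tau = 1.5        # friction velocity [m/s] (assumed)
nu = 1.5e-5        # kinematic viscosity [m^2/s] (assumed)

def to_wall_units(delta):
    return delta * u_tau / nu

dx, dy, dz = 4.5e-4, 2.0e-5, 1.7e-4   # physical spacings [m] (assumed)
print([round(to_wall_units(d), 1) for d in (dx, dy, dz)])   # -> [45.0, 2.0, 17.0]
```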
Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.
Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin
2015-01-01
Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphic processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves a speedup of up to 11.28 on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
Laser/lidar analysis and testing
NASA Technical Reports Server (NTRS)
Spiers, Gary D.
1994-01-01
Section 1 of this report details development of a model of the output pulse frequency spectrum of a pulsed transversely excited (TE) CO2 laser. In order to limit the computation time required, the model was designed around a generic laser pulse shape model. The use of such a procedure allows many possible laser configurations to be examined. The output pulse shape is combined with the calculated frequency chirp to produce the electric field of the output pulse which is then computationally mixed with a local oscillator field to produce the heterodyne beat signal that would fall on a detector. The power spectral density of this heterodyne signal is then calculated. Section 2 reports on a visit to the LAWS laser contractors to measure the performance of the laser breadboards. The intention was to acquire data using a digital oscilloscope so that it could be analyzed. Section 3 reports on a model developed to assess the power requirements of a 5J LAWS instrument on a Spot MKII platform in a polar orbit. The performance was assessed for three different latitude dependent sampling strategies.
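Section 1's procedure (combine a pulse shape with the frequency chirp, mix against a local oscillator field, and take the power spectral density of the beat) can be illustrated numerically. The sketch below assumes a Gaussian pulse with a linear chirp and arbitrary parameters; it is not the report's generic pulse-shape model.

```python
import numpy as np

fs = 200e6                     # sampling rate [Hz] (assumed)
t = np.arange(0, 20e-6, 1 / fs)

# Assumed pulse: Gaussian envelope, ~2 us wide, with a linear frequency chirp.
envelope = np.exp(-((t - 5e-6) / 2e-6) ** 2)
f0, chirp = 10e6, 0.2e12       # offset frequency [Hz] and chirp rate [Hz/s] (assumed)
pulse = envelope * np.exp(2j * np.pi * (f0 * t + 0.5 * chirp * t ** 2))

lo = np.exp(2j * np.pi * 0.0 * t)           # local oscillator at the reference frequency

beat = pulse * np.conj(lo)                  # heterodyne beat signal on the detector
psd = np.abs(np.fft.rfft(beat.real)) ** 2   # power spectral density of the beat
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[np.argmax(psd)])                # peak shifted/broadened by the chirp
```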
Methodes d'optimisation des parametres 2D du reflecteur dans un reacteur a eau pressurisee
NASA Astrophysics Data System (ADS)
Clerc, Thomas
With a third of the reactors in activity, the Pressurized Water Reactor (PWR) is today the most used reactor design in the world. This technology equips all the 19 EDF power plants. PWRs fit into the category of thermal reactors, because it is mainly the thermal neutrons that contribute to the fission reaction. The pressurized light water is both used as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium, slightly enriched in uranium 235. The reflector is a region surrounding the active core, and containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiations, and also to slow down the neutrons and reflect them into the core. Given that the neutrons participate to the reaction of fission, the study of their behavior within the core is capital to understand the general functioning of how the reactor works. The neutrons behavior is ruled by the transport equation, which is very complex to solve numerically, and requires very long calculation. This is the reason why the core codes that will be used in this study solve simplified equations to approach the neutrons behavior in the core, in an acceptable calculation time. In particular, we will focus our study on the diffusion equation and approximated transport equations, such as SPN or S N equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes important tilt in the neutron flux at the core/reflector interface. This is why it is very important to accurately design the reflector, in order to precisely recover the neutrons behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized in two energy groups, and if the diffusion equation is used. The method leads to the calculation of a homogeneous reflector. The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution, which will be obtained with an APOLLO2 calculation based on the method Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector, for diffusion calculations, and the P-1 corrected macroscopic total cross-sections in each zone of the reflector, for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed, always by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of the data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined in six separated zones, corresponding to the physical structure of the reflector. There will be then six control variables for the optimization algorithms. [special characters omitted]. Our computational schemes are then able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators. 
The optimization reduces the distribution of discrepancies between the power computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not allow a proper description of its physical structure near the core/reflector interface. Moreover, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.).
NASA Astrophysics Data System (ADS)
Kirchner-Bossi, Nicolas; Porté-Agel, Fernando
2017-04-01
Wind turbine wakes can significantly disrupt the performance of turbines further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a layout optimization strategy of acceptable computational cost, is therefore an efficient way to address the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning set-up is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian-profile velocity deficit [1], which has been shown to outperform the traditionally employed wake models in various LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Compared to the baseline gridded layout, results show a wind power output increase of between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
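The cost function above relies on the analytical Gaussian-profile wake model of reference [1]. The sketch below evaluates a single wake of that general form, with an assumed linear wake-growth law and representative (not fitted) parameters; the evolutionary layout search and wake superposition are omitted.

```python
import numpy as np

def gaussian_wake_deficit(x, r, d0=80.0, ct=0.8, k=0.04, eps=0.2):
    """Normalised velocity deficit of a single Gaussian-profile wake (sketch).

    x, r : downstream and radial distances from the rotor [m]
    d0   : rotor diameter [m]; ct : thrust coefficient; k, eps : wake-growth
           parameters (assumed representative values, not fitted ones).
    """
    sigma = k * x + eps * d0                     # wake width grows linearly downstream
    c = 1.0 - np.sqrt(1.0 - ct / (8.0 * (sigma / d0) ** 2))
    return c * np.exp(-r ** 2 / (2.0 * sigma ** 2))

# Deficit felt by a turbine 7 diameters downstream, directly in the wake centre:
u_inf = 9.0                                       # free-stream wind speed [m/s] (assumed)
deficit = gaussian_wake_deficit(7 * 80.0, 0.0)
u_downstream = u_inf * (1.0 - deficit)
# Power scales roughly with the cube of wind speed, which is why layout changes
# that reduce wake overlap translate into percent-level farm-output gains.
print(u_downstream, (u_downstream / u_inf) ** 3)
```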
A Multi-Wavelength View of Planet Forming Regions: Unleashing the Full Power of ALMA
NASA Astrophysics Data System (ADS)
Tazzari, Marco
2017-11-01
Observations at sub-mm/mm wavelengths allow us to probe the solids in the interior of protoplanetary disks, where the bulk of the dust is located and planet formation is expected to occur. However, the actual size of dust grains is still largely unknown due to the limited angular resolution and sensitivity of past observations. The upgraded VLA and, especially, the ALMA observatories now provide powerful tools to resolve grain growth in disks, making the time ripe for developing a multi-wavelength analysis of sub-mm/mm observations of disks. In my contribution I will present a novel analysis method for multi-wavelength ALMA/VLA observations which, based on the self-consistent modelling of the sub-mm/mm disk continuum emission, allows us to constrain simultaneously the size distribution of dust grains and the disk's physical structure (Tazzari et al. 2016, A&A 588 A53). I will also present the recent analysis of spatially resolved ALMA Band 7 observations of a large sample of disks in the Lupus star forming region, from which we obtained tentative evidence of a disk size-disk mass correlation (Tazzari et al. 2017, arXiv:1707.01499). Finally, I will introduce galario, a GPU Accelerated Library for the Analysis of Radio Interferometry Observations. Fitting the observed visibilities in the uv-plane is computationally demanding: with galario we solve this problem for the current as well as for the full-science ALMA capabilities by leveraging the computing power of GPUs, providing the computational breakthrough needed to fully exploit the new wealth of information delivered by ALMA.
Photonic reservoir computing: a new approach to optical information processing
NASA Astrophysics Data System (ADS)
Vandoorne, Kristof; Fiers, Martin; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter
2010-06-01
Despite ever increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology coming from the field of machine learning and neural networks that has been successfully used in several pattern classification problems, like speech and image recognition. Thus far, most implementations have been in software, limiting their speed and power efficiency. Photonics could be an excellent platform for a hardware implementation of this concept because of its inherent parallelism and unique nonlinear behaviour. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. We propose using a network of coupled Semiconductor Optical Amplifiers (SOA) and show in simulation that it could be used as a reservoir by comparing it to conventional software implementations using a benchmark speech recognition task. In spite of the differences with classical reservoir models, the performance of our photonic reservoir is comparable to that of conventional implementations and sometimes slightly better. As our implementation uses coherent light for information processing, we find that phase tuning is crucial to obtain high performance. In parallel we investigate the use of a network of photonic crystal cavities. The coupled mode theory (CMT) is used to investigate these resonators. A new framework is designed to model networks of resonators and SOAs. The same network topologies are used, but feedback is added to control the internal dynamics of the system. By adjusting the readout weights of the network in a controlled manner, we can generate arbitrary periodic patterns.
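The photonic reservoir above is benchmarked against conventional software reservoirs; for readers unfamiliar with the methodology, the sketch below implements a small software (echo-state) reservoir with a ridge-regression readout on a toy delay task. It illustrates the generic reservoir-computing workflow only and says nothing about the SOA network itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u (software reservoir sketch)."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce a delayed copy of the input with a ridge-regression readout.
u = rng.uniform(-1, 1, 1000)
target = np.roll(u, 5)
X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
print(np.corrcoef(X @ W_out, target)[0, 1])       # readout quality on the toy task
```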
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high level decisions and sends commands to and receives data from the other distributed computers. This will require a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real time because of the need to control the power elements.
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
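A common way to combine low- and high-precision arithmetic for the linear-system kernel discussed above is iterative refinement: solve cheaply in low precision, compute residuals in high precision, and correct. The sketch below is a NumPy illustration of that generic idea (with the low-precision solve repeated for brevity rather than reusing LU factors); it is not the authors' implementation or power-profiling tooling.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Iterative refinement: cheap low-precision solves, high-precision residuals (sketch)."""
    A32 = A.astype(np.float32)                 # "low precision" working copy
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                          # residual in double precision
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx                                # correct the low-precision solution
    return x

rng = np.random.default_rng(0)
A = rng.random((500, 500)) + 500 * np.eye(500)   # well-conditioned test matrix (assumed)
b = rng.random(500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                 # close to double-precision accuracy
```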
Polarized Sunyaev Zel'dovich tomography
NASA Astrophysics Data System (ADS)
Deutsch, Anne-Sylvie; Johnson, Matthew C.; Münchmeyer, Moritz; Terrana, Alexandra
2018-04-01
Secondary CMB polarization is induced by the late-time scattering of CMB photons by free electrons on our past light cone. This polarized Sunyaev Zel'dovich (pSZ) effect is sensitive to the electrons' locally observed CMB quadrupole, which is sourced primarily by long wavelength inhomogeneities. By combining the remote quadrupoles measured by free electrons throughout the Universe after reionization, the pSZ effect allows us to obtain additional information about large scale modes beyond what can be learned from our own last scattering surface. Here we determine the power of pSZ tomography, in which the pSZ effect is cross-correlated with the density field binned at several redshifts, to provide information about the long wavelength Universe. The signal we explore here is a power asymmetry in the cross-correlation between E or B mode CMB polarization and the density field. We compare this to the cosmic variance limited noise: the random chance to get a power asymmetry in the absence of a large scale quadrupole field. By computing the necessary transfer functions and cross-correlations, we compute the signal-to-noise ratio attainable by idealized next generation CMB experiments and galaxy surveys. We find that a signal-to-noise ratio of ~ 1‑10 is in principle attainable over a significant range of power multipoles, with the strongest signal coming from the first multipoles in the lowest redshift bins. These results prompt further assessment of realistically measuring the pSZ signal and the potential impact for constraining cosmology on large scales.
Transitioning EEG experiments away from the laboratory using a Raspberry Pi 2.
Kuziek, Jonathan W P; Shienh, Axita; Mathewson, Kyle E
2017-02-01
Electroencephalography (EEG) experiments are typically performed in controlled laboratory settings to minimise noise and produce reliable measurements. These controlled conditions also reduce the applicability of the obtained results to more varied environments and may limit their relevance to everyday situations. Advances in computer portability may increase the mobility and applicability of EEG results while decreasing costs. In this experiment we show that stimulus presentation using a Raspberry Pi 2 computer provides a low cost, reliable alternative to a traditional desktop PC in the administration of EEG experimental tasks. Significant and reliable MMN and P3 activity, typical event-related potentials (ERPs) associated with an auditory oddball paradigm, was measured while experiments were administered using the Raspberry Pi 2. While latency differences in ERP triggering were observed between systems, these differences reduced power only marginally, likely due to the reduced processing power of the Raspberry Pi 2. An auditory oddball task administered using the Raspberry Pi 2 produced similar ERPs to those derived from a desktop PC in a laboratory setting. Despite temporal differences and slight increases in trials needed for similar statistical power, the Raspberry Pi 2 can be used to design and present auditory experiments comparable to a PC. Our results show that the Raspberry Pi 2 is a low cost alternative to the desktop PC when administering EEG experiments and, due to its small size and low power consumption, will enable mobile EEG experiments unconstrained by a traditional laboratory setting. Copyright © 2016 Elsevier B.V. All rights reserved.
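For context, an auditory oddball paradigm like the one administered on the Raspberry Pi 2 interleaves frequent "standard" tones with rare "deviant" tones at jittered intervals. The sketch below generates such a trial list with assumed tone frequencies and probabilities; audio output and EEG trigger I/O are omitted, and none of the parameters are taken from the paper.

```python
import random

def oddball_sequence(n_trials=400, p_deviant=0.2, seed=0):
    """Generate an oddball trial list: frequent 'standard' vs rare 'deviant' tones (sketch)."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            trials.append({"type": "deviant", "freq_hz": 1500})   # assumed tone
        else:
            trials.append({"type": "standard", "freq_hz": 1000})  # assumed tone
        # Jittered inter-stimulus interval, as is typical for MMN/P3 paradigms.
        trials[-1]["isi_s"] = 0.5 + rng.uniform(0.0, 0.2)
    return trials

seq = oddball_sequence()
print(sum(t["type"] == "deviant" for t in seq), "deviants out of", len(seq))
```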
Launching of Active Galactic Nuclei Jets
NASA Astrophysics Data System (ADS)
Tchekhovskoy, Alexander
As black holes accrete gas, they often produce relativistic, collimated outflows, or jets. Jets are expected to form in the vicinity of a black hole, making them powerful probes of strong-field gravity. However, how jet properties (e.g., jet power) connect to those of the accretion flow (e.g., mass accretion rate) and the black hole (e.g., black hole spin) remains an area of active research. This is because what determines a crucial parameter that controls jet properties—the strength of large-scale magnetic flux threading the black hole—remains largely unknown. First-principles computer simulations show that due to this, even if black hole spin and mass accretion rate are held constant, the simulated jet powers span a wide range, with no clear winner. This limits our ability to use jets as a quantitative diagnostic tool of accreting black holes. Recent advances in computer simulations demonstrated that accretion disks can accumulate large-scale magnetic flux on the black hole, until the magnetic flux becomes so strong that it obstructs gas infall and leads to a magnetically-arrested disk (MAD). Recent evidence suggests that central black holes in jetted active galactic nuclei and tidal disruptions are surrounded by MADs. Since in MADs both the black hole magnetic flux and the jet power are at their maximum, well-defined values, this opens up a new vista in the measurements of black hole masses and spins and quantitative tests of accretion and jet theory.
Towards high-resolution mantle convection simulations
NASA Astrophysics Data System (ADS)
Höink, T.; Richards, M. A.; Lenardic, A.
2009-12-01
The motion of tectonic plates at the Earth’s surface, earthquakes, most forms of volcanism, the growth and evolution of continents, and the volatile fluxes that govern the composition and evolution of the oceans and atmosphere are all controlled by the process of solid-state thermal convection in the Earth’s rocky mantle, with perhaps a minor contribution from convection in the iron core. Similar processes govern the evolution of other planetary objects such as Mars, Venus, Titan, and Europa, all of which might conceivably shed light on the origin and evolution of life on Earth. Modeling and understanding this complicated dynamical system is one of the true “grand challenges” of Earth and planetary science. In the past three decades much progress towards understanding the dynamics of mantle convection has been made, with the increasing aid of computational modeling. Numerical sophistication has evolved significantly, and a small number of independent codes have been successfully employed. Computational power continues to increase dramatically, and with it the ability to resolve increasingly finer fluid mechanical structures. Yet, the perhaps most often cited limitation in numerical modeling based publications is still the limitation of computing power, because the ability to resolve thermal boundary layers within the convecting mantle (e.g., lithospheric plates), requires a spatial resolution of ~ 10 km. At present, the largest supercomputing facilities still barely approach the power to resolve this length scale in mantle convection simulations that include the physics necessary to model plate-like behavior. Our goal is to use supercomputing facilities to perform 3D spherical mantle convection simulations that include the ingredients for plate-like behavior, i.e. strongly temperature- and stress-dependent viscosity, at Earth-like convective vigor with a global resolution of order 10 km. In order to qualify to use such facilities, it is also necessary to demonstrate good parallel efficiency. Here we will present two kinds of results: (1) scaling properties of the community code CitcomS on DOE/NERSC's supercomputer Franklin for up to ~ 6000 processors, and (2) preliminary simulations that illustrate the role of a low-viscosity asthenosphere in plate-like behavior in mantle convection.
Calibration of radio-astronomical data on the cloud. LOFAR, the pathway to SKA
NASA Astrophysics Data System (ADS)
Sabater, J.; Sánchez-Expósito, S.; Garrido, J.; Ruiz, J. E.; Best, P. N.; Verdes-Montenegro, L.
2015-05-01
The radio interferometer LOFAR (LOw Frequency ARray) is now fully operational. This Square Kilometre Array (SKA) pathfinder allows the observation of the sky at frequencies between 10 and 240 MHz, a relatively unexplored region of the spectrum. LOFAR is a software defined telescope: the data is mainly processed using specialized software running in common computing facilities. That means that the capabilities of the telescope are virtually defined by software and mainly limited by the available computing power. However, the quantity of data produced can quickly reach huge volumes (several Petabytes per day). After the correlation and pre-processing of the data in a dedicated cluster, the final dataset is handed over to the user (typically several Terabytes). The calibration of these data requires a powerful computing facility in which the specific state of the art software under heavy continuous development can be easily installed and updated. That makes this case a perfect candidate for a cloud infrastructure which adds the advantages of an on demand, flexible solution. We present our approach to the calibration of LOFAR data using Ibercloud, the cloud infrastructure provided by Ibergrid. With the calibration work-flow adapted to the cloud, we can explore calibration strategies for the SKA and show how private or commercial cloud infrastructures (Ibercloud, Amazon EC2, Google Compute Engine, etc.) can help to solve the problems with big datasets that will be prevalent in the future of astronomy.
Viscoelastic Finite Difference Modeling Using Graphics Processing Units
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.
2014-12-01
Full-waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, such as full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-domain finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License. The implementation uses second-order centred differences to approximate time derivatives and staggered-grid centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel, written using the Message Passing Interface (MPI), and thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies with the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by the amortization of kernel overheads and of the delays introduced by memory transfers to and from the GPU through the PCI-E bus. These tests indicate that GPU memory size and slow memory transfers are the limiting factors of our GPU implementation. The results show the benefits of using GPUs instead of CPUs for time-domain finite-difference seismic simulations. The reductions in computation time and in hardware costs are significant and open the door to new approaches in seismic inversion.
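To make the numerical scheme concrete, here is a minimal 1-D elastic velocity-stress staggered-grid update, second order in time and space. It is a heavily simplified stand-in for the 2-D viscoelastic, MPI/OpenCL code described above (no attenuation, no absorbing boundaries, no GPU); the material values and source are assumptions:

```python
# Minimal 1-D elastic velocity-stress staggered-grid sketch (2nd order in
# time and space). Simplified stand-in for the 2-D viscoelastic GPU code:
# no attenuation, no MPI/OpenCL, absorbing boundaries omitted.
import numpy as np

nx, nt = 400, 800
dx, dt = 5.0, 5e-4                  # m, s (assumed; CFL = vp*dt/dx = 0.3)
rho, vp = 2000.0, 3000.0            # density, P-wave velocity (assumed)
mu = rho * vp**2                    # elastic modulus

v = np.zeros(nx)                    # particle velocity
s = np.zeros(nx)                    # stress, staggered half a cell from v

for it in range(nt):
    # velocity update from the stress gradient
    v[1:-1] += dt / rho * (s[1:-1] - s[:-2]) / dx
    # stress update from the velocity gradient
    s[1:-1] += dt * mu * (v[2:] - v[1:-1]) / dx
    # Ricker-like source injected at the centre of the grid
    t = it * dt
    s[nx // 2] += (1.0 - 2.0 * (np.pi * 25.0 * (t - 0.04))**2) * \
        np.exp(-(np.pi * 25.0 * (t - 0.04))**2)

print("max |v| after", nt, "steps:", np.abs(v).max())
```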
Cosmic microwave background trispectrum and primordial magnetic field limits.
Trivedi, Pranjal; Seshadri, T R; Subramanian, Kandaswamy
2012-06-08
Primordial magnetic fields will generate non-gaussian signals in the cosmic microwave background (CMB) as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. We compute a new measure of magnetic non-gaussianity, the CMB trispectrum, on large angular scales, sourced via the Sachs-Wolfe effect. The trispectra induced by magnetic energy density and by magnetic scalar anisotropic stress are found to have typical magnitudes of approximately a few times 10^-29 and 10^-19, respectively. Observational limits on CMB non-gaussianity from WMAP data allow us to conservatively set upper limits of a nG, and plausibly sub-nG, on the present value of the primordial cosmic magnetic field. This represents the tightest limit so far on the strength of primordial magnetic fields, on Mpc scales, and is better than limits from the CMB bispectrum and all modes in the CMB power spectrum. Thus, the CMB trispectrum is a new and more sensitive probe of primordial magnetic fields on large scales.
Teach Graphic Design Basics with PowerPoint
ERIC Educational Resources Information Center
Lazaros, Edward J.; Spotts, Thomas H.
2007-01-01
While PowerPoint is generally regarded as simply software for creating slide presentations, it includes often overlooked--but powerful--drawing tools. Because it is part of the Microsoft Office package, PowerPoint comes preloaded on many computers and thus is already available in many classrooms. Since most computers are not preloaded with good…
NASA Technical Reports Server (NTRS)
Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.
1974-01-01
The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.
Heterotic computing: exploiting hybrid computational devices.
Kendon, Viv; Sebald, Angelika; Stepney, Susan
2015-07-28
Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, and have only recently realized that such combinations can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Beda, Alessandro; Simpson, David M; Faes, Luca
2017-01-01
The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested on a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessment of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings.
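A minimal sketch of the Monte Carlo branch of this idea is shown below: fit an AR model to one recording, resimulate from the fitted model, refit, and take percentiles of the recomputed index. The AR order, the example index (a low-frequency spectral power fraction), and the confidence level are assumptions for illustration, not the authors' exact implementation:

```python
# Illustrative Monte Carlo confidence limits for an AR-based spectral index.
# AR order, index definition, and confidence level are assumptions.
import numpy as np

rng = np.random.default_rng(0)


def fit_ar(x, p=2):
    """Least-squares AR(p) fit; returns coefficients and residual std."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, (y - X @ a).std(ddof=p)


def lf_power_fraction(a, sigma, fs=4.0, f_lo=0.04, f_hi=0.15):
    """Fraction of the AR spectrum in a low-frequency band (example index)."""
    f = np.linspace(0, fs / 2, 512)
    z = np.exp(-2j * np.pi * f / fs)
    denom = 1 - sum(a[k] * z**(k + 1) for k in range(len(a)))
    psd = sigma**2 / np.abs(denom)**2
    band = (f >= f_lo) & (f <= f_hi)
    return psd[band].sum() / psd.sum()


def simulate(a, sigma, n):
    x = np.zeros(n + 100)                       # 100-sample burn-in
    for i in range(len(a), len(x)):
        x[i] = sum(a[k] * x[i - k - 1] for k in range(len(a))) + rng.normal(0, sigma)
    return x[100:]


# "Recorded" series: one synthetic realization standing in for real data.
x_obs = simulate(np.array([0.6, -0.3]), 1.0, 300)
a_hat, s_hat = fit_ar(x_obs)
idx_hat = lf_power_fraction(a_hat, s_hat)

# Monte Carlo: resimulate from the fitted model, refit, recompute the index.
mc = []
for _ in range(500):
    a_mc, s_mc = fit_ar(simulate(a_hat, s_hat, len(x_obs)))
    mc.append(lf_power_fraction(a_mc, s_mc))

lo, hi = np.percentile(mc, [2.5, 97.5])
print(f"index = {idx_hat:.3f}, 95% MC confidence limits = ({lo:.3f}, {hi:.3f})")
```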
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
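As a loose illustration of time-domain simulation of a switching regulator (topic 1 above), the sketch below steps an ideal buck converter through its on/off sub-intervals with forward-Euler sub-steps; component values and duty cycle are assumptions, and the report's actual discrete-time models are not reproduced here:

```python
# Crude time-domain simulation of an ideal buck converter: inductor current
# and capacitor voltage are advanced through the on and off sub-intervals of
# one switching period. All values are assumptions, not from the report.
Vin, L, C, R = 28.0, 100e-6, 47e-6, 10.0   # V, H, F, ohm (assumed)
fs, duty = 100e3, 0.5                      # switching frequency, duty cycle
Ts = 1.0 / fs


def step_period(iL, vC, n_sub=100):
    """Advance the state (iL, vC) through one switching period."""
    dt = Ts / n_sub
    for n in range(n_sub):
        v_switch = Vin if n * dt < duty * Ts else 0.0   # switch-node voltage
        diL = (v_switch - vC) / L                       # inductor equation
        dvC = (iL - vC / R) / C                         # capacitor equation
        iL, vC = iL + dt * diL, vC + dt * dvC
    return iL, vC


iL, vC = 0.0, 0.0
for _ in range(2000):                      # run to approximate steady state
    iL, vC = step_period(iL, vC)
print(f"output voltage ~ {vC:.2f} V (ideal: {duty * Vin:.1f} V)")
```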
Expression Templates for Truncated Power Series
NASA Astrophysics Data System (ADS)
Cary, John R.; Shasharina, Svetlana G.
1997-05-01
Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
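For readers unfamiliar with the underlying object, the sketch below shows truncated power series arithmetic with operator overloading in plain Python; it illustrates the naive object-per-operation style (with its temporaries) that the C++ expression-template technique is designed to avoid, and is not the authors' library:

```python
# Conceptual sketch of truncated power series arithmetic with operator
# overloading; each +/* creates a temporary object, which is exactly the
# overhead that C++ expression templates eliminate.
class TPS:
    """Power series truncated at a fixed order; c[k] multiplies x**k."""

    def __init__(self, coeffs, order=5):
        self.order = order
        self.c = list(coeffs)[:order + 1] + [0.0] * max(0, order + 1 - len(coeffs))

    def __add__(self, other):
        return TPS([a + b for a, b in zip(self.c, other.c)], self.order)

    def __mul__(self, other):
        c = [0.0] * (self.order + 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                if i + j <= self.order:     # drop terms beyond the truncation order
                    c[i + j] += a * b
        return TPS(c, self.order)

    def __repr__(self):
        return " + ".join(f"{a:g}*x^{k}" for k, a in enumerate(self.c))


x = TPS([0.0, 1.0])           # the series "x"
one = TPS([1.0])
print((one + x) * (one + x))  # 1 + 2x + x^2, truncated at order 5
```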
Systems and methods for rapid processing and storage of data
Stalzer, Mark A.
2017-01-24
Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.
1979-01-01
The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
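A common concrete instance of the low/high-precision combination is iterative refinement for Ax = b: factor and solve in single precision, then correct with double-precision residuals. The sketch below illustrates that general idea on assumed test data; it is not the authors' kernel or power-profiling tooling:

```python
# Mixed-precision iterative refinement for Ax = b: solve in float32, refine
# with float64 residuals. The test matrix is an assumption; a real
# implementation would reuse the low-precision LU factors instead of
# calling solve() repeatedly.
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for it in range(5):
    r = b - A @ x                                   # residual in float64
    print(f"iter {it}: residual norm = {np.linalg.norm(r):.3e}")
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx                                         # high-precision correction
```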
Low-power secure body area network for vital sensors toward IEEE802.15.6.
Kuroda, Masahiro; Qiu, Shuye; Tochikubo, Osamu
2009-01-01
Many healthcare/medical services have started using personal area networks, such as Bluetooth and ZigBee; these networks consist of various types of vital sensors. Existing work focuses on generalized functions for sensor networks that assume ample battery capacity and low-power CPU/RF (radio frequency) modules, but pays less attention to easy-to-use privacy protection. In this paper, we propose a commercially deployable secure body area network (S-BAN) with reduced computational burden on a real sensor that has limited RAM/ROM size and CPU/RF power consumption under a light-weight battery. Our proposed S-BAN provides vital-data ordering among the sensors involved in the S-BAN and also provides low-power networking with zero-administration security by automatic private key generation. We design and implement a power-efficient media access control (MAC) with resource-constrained security in sensors. Then, we evaluate the power efficiency of the S-BAN consisting of small sensors, such as an accessory-type ECG and a ring-type SpO2 sensor. The evaluation of the power efficiency of the S-BAN using real sensors gives us confidence in deploying the S-BAN and will also help us provide feedback to the IEEE802.15.6 MAC, which will be the standard for BANs.
Advanced development of double-injection, deep-impurity semiconductor switches
NASA Technical Reports Server (NTRS)
Hanes, M. H.
1987-01-01
Deep-impurity, double-injection devices, commonly referred to as (DI) squared devices, represent a class of semiconductor switches possessing a very high degree of tolerance to electron and neutron irradiation and to elevated-temperature operation. These properties have caused them to be considered as attractive candidates for space power applications. The design, fabrication, and testing of several varieties of (DI) squared devices intended for power switching are described. All of these designs were based upon gold-doped silicon material. Test results, along with results of computer simulations of device operation, other calculations based upon the assumed mode of operation of (DI) squared devices, and empirical information regarding power semiconductor device operation and limitations, have led to the conclusion that these devices are not well suited to high-power applications. When operated in power circuit configurations, they exhibit high power losses in both the off-state and on-state modes. These losses are caused by phenomena inherent to the physics and material of the devices and cannot be much reduced by device design optimization. The (DI) squared technology may, however, find application in low-power functions such as sensing, logic, and memory, where tolerance to radiation and temperature is desirable (especially if device performance is improved by incorporation of deep-level impurities other than gold).
Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…
Wan, Kelvin H; Chong, Kelvin K L; Young, Alvin L
2015-12-08
Post-traumatic orbital reconstruction remains a surgical challenge and requires careful preoperative planning, sound anatomical knowledge and good intraoperative judgment. Computer-assisted technology has the potential to reduce error and subjectivity in the management of these complex injuries. A systematic review of the literature was conducted to explore the emerging role of computer-assisted technologies in post-traumatic orbital reconstruction, in terms of functional and safety outcomes. We searched for articles comparing computer-assisted procedures with conventional surgery and studied outcomes on diplopia, enophthalmos, or procedure-related complications. Six observational studies with 273 orbits at a mean follow-up of 13 months were included. Three out of 4 studies reported significantly fewer patients with residual diplopia in the computer-assisted group, while only 1 of the 5 studies reported better improvement in enophthalmos in the assisted group. Types and incidence of complications were comparable. Study heterogeneities limiting statistical comparison by meta-analysis will be discussed. This review highlights the scarcity of data on computer-assisted technology in orbital reconstruction. The result suggests that computer-assisted technology may offer potential advantage in treating diplopia while its role remains to be confirmed in enophthalmos. Additional well-designed and powered randomized controlled trials are much needed.
Predicting protein structures with a multiplayer online game.
Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit
2010-08-05
People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.
Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko
2013-06-18
Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
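To make the target problem concrete, the sketch below runs a simple stochastic local search (WalkSAT-style flips) on a tiny CNF formula; it illustrates the kind of conventional stochastic search that such amoeba-inspired methods are compared against, not the authors' actual baseline or their photoexcitation-transfer dynamics:

```python
# Simple stochastic local search for SAT on a tiny CNF formula; shown only
# to make the problem concrete, not the authors' method or baseline.
import random

random.seed(0)
# Clauses as literal tuples; literal k means variable |k| is True if k > 0.
clauses = [(1, 2, -3), (-1, 3, 4), (-2, -4, 1), (2, 3, -4), (-1, -2, -3)]
n_vars = 4


def unsatisfied(assign):
    """Return the clauses not satisfied by the current assignment."""
    return [c for c in clauses
            if not any((lit > 0) == assign[abs(lit)] for lit in c)]


assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
for step in range(1000):
    unsat = unsatisfied(assign)
    if not unsat:
        print(f"satisfying assignment after {step} flips: {assign}")
        break
    # flip a random variable taken from a random unsatisfied clause
    var = abs(random.choice(random.choice(unsat)))
    assign[var] = not assign[var]
else:
    print("no solution found within the flip budget")
```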
Power spectrum for the small-scale Universe
NASA Astrophysics Data System (ADS)
Widrow, Lawrence M.; Elahi, Pascal J.; Thacker, Robert J.; Richardson, Mark; Scannapieco, Evan
2009-08-01
The first objects to arise in a cold dark matter (CDM) universe present a daunting challenge for models of structure formation. In the ultra small-scale limit, CDM structures form nearly simultaneously across a wide range of scales. Hierarchical clustering no longer provides a guiding principle for theoretical analyses and the computation time required to carry out credible simulations becomes prohibitively high. To gain insight into this problem, we perform high-resolution (N = 720^3-1584^3) simulations of an Einstein-de Sitter cosmology where the initial power spectrum is P(k) ~ k^n, with -2.5 <= n <= -1. Self-similar scaling is established for n = -1 and -2 more convincingly than in previous, lower resolution simulations and for the first time, self-similar scaling is established for an n = -2.25 simulation. However, finite box-size effects induce departures from self-similar scaling in our n = -2.5 simulation. We compare our results with the predictions for the power spectrum from (one-loop) perturbation theory and demonstrate that the renormalization group approach suggested by McDonald improves perturbation theory's ability to predict the power spectrum in the quasi-linear regime. In the non-linear regime, our power spectra differ significantly from the widely used fitting formulae of Peacock & Dodds and Smith et al. and a new fitting formula is presented. Implications of our results for the stable clustering hypothesis versus halo model debate are discussed. Our power spectra are inconsistent with predictions of the stable clustering hypothesis in the high-k limit and lend credence to the halo model. Nevertheless, the fitting formula advocated in this paper is purely empirical and not derived from a specific formulation of the halo model.
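For reference, measuring P(k) from a simulation snapshot typically amounts to an FFT of the overdensity field followed by spherical binning in k. The sketch below shows that recipe on a random stand-in field with an arbitrary box size and binning; it is not the paper's analysis pipeline or its fitting formula:

```python
# Minimal P(k) measurement: FFT of a periodic overdensity field, then
# spherical binning in |k|. Field, box size, and binning are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, box = 64, 100.0                       # grid cells per side, box size (arbitrary)
delta = rng.standard_normal((n, n, n))   # stand-in for an overdensity field

delta_k = np.fft.rfftn(delta) * (box / n) ** 3
power = np.abs(delta_k) ** 2 / box**3    # |delta_k|^2 / V

k = 2 * np.pi * np.array(np.meshgrid(
    np.fft.fftfreq(n, d=box / n),
    np.fft.fftfreq(n, d=box / n),
    np.fft.rfftfreq(n, d=box / n),
    indexing="ij"))
kmag = np.sqrt((k ** 2).sum(axis=0))

bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), 20)
which = np.digitize(kmag.ravel(), bins)
pk = [power.ravel()[which == i].mean() if np.any(which == i) else 0.0
      for i in range(1, len(bins))]
print(np.round(pk, 4))
```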
Optimizing the Placement of Burnable Poisons in PWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Serkan; Ivanov, Kostadin; Levine, Samuel
2005-07-15
The principal focus of this work is on developing a practical tool for designing the minimum amount of burnable poisons (BPs) for a pressurized water reactor using a typical Three Mile Island Unit 1 2-yr cycle as the reference design. The results of this study are to be applied to future reload designs. A new method, the Modified Power Shape Forced Diffusion (MPSFD) method, is presented that initially computes the BP cross section to force the power distribution into a desired shape. The method employs a simple formula that expresses the BP cross section as a function of the difference between the calculated radial power distributions (RPDs) and the limit set for the maximum RPD. This method places BPs into all fresh fuel assemblies (FAs) having an RPD greater than the limit. The MPSFD method then reduces the BP content by reducing the BPs in fresh FAs with the lowest RPDs. Finally, the minimum BP content is attained via a heuristic fine-tuning procedure. This new BP design program has been automated by incorporating the new MPSFD method in conjunction with the heuristic fine-tuning program. The program has automatically produced excellent results for the reference core, and has the potential to reduce fuel costs and save manpower.
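The sketch below is a toy version of the MPSFD idea: add absorber to each fresh assembly in proportion to how far its relative power exceeds the limit, then re-evaluate and repeat. The "core response" here is a trivial suppression-plus-renormalization stand-in and every number is an assumption; it is not the actual MPSFD formula or a diffusion calculation:

```python
# Toy version of the MPSFD idea: add absorber where the relative power
# distribution (RPD) exceeds the limit, re-evaluate, repeat. The "core
# response" is a crude stand-in; all numbers are assumptions.
import numpy as np

rpd0 = np.array([1.45, 1.30, 1.18, 1.05, 0.95, 0.80])  # assumed unpoisoned RPDs
limit = 1.20                                           # assumed RPD limit
gain = 0.5                                             # assumed feedback gain
bp = np.zeros_like(rpd0)                               # relative BP worth added

for sweep in range(200):
    suppressed = rpd0 / (1.0 + bp)                      # poison lowers local power
    rpd = suppressed / suppressed.mean() * rpd0.mean()  # renormalize total power
    excess = np.maximum(rpd - limit, 0.0)
    if excess.max() < 1e-3:
        break
    bp += gain * excess                                 # add poison only where over limit

print("final RPDs :", np.round(rpd, 3))
print("BP loading :", np.round(bp, 3))
```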
A diffusive information preservation method for small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2013-06-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3-10^-4 have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
A brain computer interface using electrocorticographic signals in humans
NASA Astrophysics Data System (ADS)
Leuthardt, Eric C.; Schalk, Gerwin; Wolpaw, Jonathan R.; Ojemann, Jeffrey G.; Moran, Daniel W.
2004-06-01
Brain-computer interfaces (BCIs) enable users to control devices with electroencephalographic (EEG) activity from the scalp or with single-neuron activity from within the brain. Both methods have disadvantages: EEG has limited resolution and requires extensive training, while single-neuron recording entails significant clinical risks and has limited stability. We demonstrate here for the first time that electrocorticographic (ECoG) activity recorded from the surface of the brain can enable users to control a one-dimensional computer cursor rapidly and accurately. We first identified ECoG signals that were associated with different types of motor and speech imagery. Over brief training periods of 3-24 min, four patients then used these signals to master closed-loop control and to achieve success rates of 74-100% in a one-dimensional binary task. In additional open-loop experiments, we found that ECoG signals at frequencies up to 180 Hz encoded substantial information about the direction of two-dimensional joystick movements. Our results suggest that an ECoG-based BCI could provide for people with severe motor disabilities a non-muscular communication and control option that is more powerful than EEG-based BCIs and is potentially more stable and less traumatic than BCIs that use electrodes penetrating the brain. The authors declare that they have no competing financial interests.
A Petaflops Era Computing Analysis
NASA Technical Reports Server (NTRS)
Preston, Frank S.
1998-01-01
This report covers a study of the potential for petaflops (10^15 floating point operations per second) computing. This study was performed within the year 1996 and should be considered as the first step in an on-going effort. The analysis concludes that a petaflop system is technically feasible but not achievable with today's state of the art. Since the computer arena is now a commodity business, most experts expect that a petaflops system will evolve from current technology in an evolutionary fashion. To meet the price expectations of users waiting for petaflop performance, great improvements in lowering component costs will be required. Lower power consumption is also a must. The present rate of progress in improved performance places the date of introduction of petaflop systems at about 2010. Several years before that date, it is projected that chip feature sizes will reach the currently known resolution limit. Aside from the economic problems and constraints, software is identified as the major problem. The tone of this initial study is more pessimistic than most of the published material available on petaflop systems. Workers in the field are expected to generate more data which could serve to provide a basis for a more informed projection. This report includes an annotated bibliography.
Architecture design of motion estimation for ITU-T H.263
NASA Astrophysics Data System (ADS)
Ku, Chung-Wei; Lin, Gong-Sheng; Chen, Liang-Gee; Lee, Yung-Ping
1997-01-01
Digitized video and audio systems have become the trend in multimedia, because they provide great performance in quality and flexibility of processing. However, since a huge amount of information is involved while the bandwidth is limited, data compression plays an important role in such systems. For example, a 176 x 144 monochrome sequence at 10 frames/sec requires a bandwidth of about 2 Mbps, which wastes much channel resource and limits the applications. MPEG (Moving Picture Experts Group) standardizes video codec schemes that achieve a high compression ratio while providing good quality. MPEG-1 is used for frame sizes of about 352 x 240 at 30 frames per second, and MPEG-2 provides scalability and can be applied to scenes of higher definition, such as HDTV (high-definition television). On the other hand, some applications require very low bit rates, such as videophone and video-conferencing. Because the channel bandwidth is very limited in the telephone network, a very high compression ratio is required. ITU-T announced the H.263 video coding standard to meet these requirements [8]. According to the simulation results of TMN-5 [22], it outperforms H.263 with little overhead in complexity. Since wireless communication is the trend in the near future, low-power design of the video codec is an important issue for portable visual telephones. Motion estimation is the most computation-consuming part of the whole video codec; about 60% of the encoder computation is spent on it. Several architectures have been proposed for efficient processing of block-matching algorithms. In this paper, in order to meet the requirements of H.263 and the expectation of low power consumption, a modified sandwich architecture based on [21] is proposed. Based on a parallel-processing philosophy, low power is expected, and the generation of either one motion vector or four motion vectors with half-pixel accuracy is achieved concurrently. In addition, we present our solution for handling the other additional modes in H.263 with the proposed architecture.
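Since full-search block matching is the computation that dominates the encoder, a minimal reference version is sketched below (sum of absolute differences over a +/-7 pixel window); block size, search range, and the synthetic frames are placeholders, and the proposed sandwich architecture is not modeled:

```python
# Minimal full-search block matching with the sum of absolute differences
# (SAD). Block size, search range, and synthetic frames are placeholders.
import numpy as np

rng = np.random.default_rng(3)
prev = rng.integers(0, 256, (64, 64)).astype(np.int32)   # reference frame
curr = np.roll(prev, (2, -3), axis=(0, 1))               # "moved" current frame
B, R = 16, 7                                             # block size, search range


def best_vector(by, bx):
    """Return the motion vector minimizing SAD for the block at (by, bx)."""
    block = curr[by:by + B, bx:bx + B]
    best, mv = None, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= prev.shape[0] - B and 0 <= x <= prev.shape[1] - B:
                sad = np.abs(block - prev[y:y + B, x:x + B]).sum()
                if best is None or sad < best:
                    best, mv = sad, (dy, dx)
    return mv, best


print(best_vector(16, 16))   # expect a motion vector of (-2, 3) with SAD 0
```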
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models are stationary, e.g., the covariance relationships are time-invariant. In such cases, information gain is independent of previously collected data, and the optimal solution can always be computed in advance. Information-optimal sampling of a stationary GP time series thus reduces to even spacing, and such models are not appropriate for tracking localized anomalies. Additionally, GP model inference can be computationally expensive.
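A minimal version of the variance-driven sampling decision is sketched below with a hand-rolled Gaussian process: compute the posterior predictive variance at candidate future times and sample where it is largest. A stationary RBF kernel is used purely for brevity, even though the work above stresses that nonstationary covariances are needed to track anomalies; all numbers are assumptions:

```python
# Sketch of variance-driven sample scheduling with a Gaussian process.
# Stationary RBF kernel for brevity; the abstract notes that nonstationary
# covariances are what actually let the model track local anomalies.
import numpy as np


def rbf(a, b, ell=2.0, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)


t_obs = np.array([0.0, 1.0, 2.5, 6.0])         # past sample times (assumed)
y_obs = np.array([0.1, 0.3, 0.2, 1.4])         # past measurements (assumed)
noise = 1e-3

K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
t_cand = np.linspace(6.5, 12.0, 100)           # candidate future sample times
k_star = rbf(t_cand, t_obs)

# Posterior predictive variance at each candidate time; for a Gaussian model,
# picking the largest variance maximizes the expected information gain.
v = rbf(t_cand, t_cand).diagonal() - np.einsum(
    "ij,jk,ik->i", k_star, np.linalg.inv(K), k_star)
print("next sample time:", t_cand[np.argmax(v)])
```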
Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng
2017-01-01
Since wind power is integrated into the thermal power operation system, dynamic economic emission dispatch (DEED) has become a new challenge due to its uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, scenarios are generated to simulate possible situations by dividing the uncertainty domain into different intervals; the probability of each interval can be calculated using the cumulative distribution function, and a stochastic DEED model can be formulated under the different scenarios. To enhance the optimization efficiency, a Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the population evolution process, and an adaptive grid is constructed for retaining the diversity distribution of the Pareto front. In consideration of the large number of generated scenarios, a reduction mechanism is carried out to decrease the number of scenarios using covariance relationships, which can greatly decrease the computational complexity. Moreover, a constraint-handling technique is also utilized to deal with the system load balance while considering transmission loss among thermal units and wind farms; all the constraint limits can be satisfied within the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that, in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED can decrease the conservativeness of interval optimization, which provides a more valuable optimal scheme for real-world applications. PMID:28961262
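The scenario-generation step described above can be illustrated compactly: divide the wind-power forecast-error domain into intervals and weight each interval by its probability from the cumulative distribution function. The normal error model, interval count, and forecast numbers below are assumptions for illustration:

```python
# Sketch of interval-based scenario generation for wind power uncertainty:
# split the forecast-error domain into intervals and weight each by its
# CDF probability. The normal error model and all numbers are assumptions.
from math import erf, sqrt

forecast = 120.0      # MW, point forecast (assumed)
sigma = 15.0          # MW, forecast error standard deviation (assumed)
n_intervals = 7
width = 6 * sigma / n_intervals          # cover roughly +/- 3 sigma


def normal_cdf(x, mu, s):
    return 0.5 * (1.0 + erf((x - mu) / (s * sqrt(2.0))))


scenarios = []
for i in range(n_intervals):
    lo = forecast - 3 * sigma + i * width
    hi = lo + width
    prob = normal_cdf(hi, forecast, sigma) - normal_cdf(lo, forecast, sigma)
    scenarios.append(((lo + hi) / 2.0, prob))     # interval midpoint, probability

for power, prob in scenarios:
    print(f"scenario: {power:6.1f} MW  probability: {prob:.3f}")
```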
Energy efficiency analysis and optimization for mobile platforms
NASA Astrophysics Data System (ADS)
Metri, Grace Camille
The introduction of mobile devices changed the landscape of computing. Gradually, these devices are replacing traditional personal computer (PCs) to become the devices of choice for entertainment, connectivity, and productivity. There are currently at least 45.5 million people in the United States who own a mobile device, and that number is expected to increase to 1.5 billion by 2015. Users of mobile devices expect and mandate that their mobile devices have maximized performance while consuming minimal possible power. However, due to the battery size constraints, the amount of energy stored in these devices is limited and is only growing by 5% annually. As a result, we focused in this dissertation on energy efficiency analysis and optimization for mobile platforms. We specifically developed SoftPowerMon, a tool that can power profile Android platforms in order to expose the power consumption behavior of the CPU. We also performed an extensive set of case studies in order to determine energy inefficiencies of mobile applications. Through our case studies, we were able to propose optimization techniques in order to increase the energy efficiency of mobile devices and proposed guidelines for energy-efficient application development. In addition, we developed BatteryExtender, an adaptive user-guided tool for power management of mobile devices. The tool enables users to extend battery life on demand for a specific duration until a particular task is completed. Moreover, we examined the power consumption of System-on-Chips (SoCs) and observed the impact on the energy efficiency in the event of offloading tasks from the CPU to the specialized custom engines. Based on our case studies, we were able to demonstrate that current software-based power profiling techniques for SoCs can have an error rate close to 12%, which needs to be addressed in order to be able to optimize the energy consumption of the SoC. Finally, we summarize our contributions and outline possible direction for future research in this field.
The Space Technology 5 Avionics System
NASA Technical Reports Server (NTRS)
Speer, Dave; Jackson, George; Stewart, Karen; Hernandez-Pellerano, Amri
2004-01-01
The Space Technology 5 (ST5) mission is a NASA New Millennium Program project that will validate new technologies for future space science missions and demonstrate the feasibility of building, launching, and operating multiple, miniature spacecraft that can collect research-quality in-situ science measurements. The three satellites in the ST5 constellation will be launched into a sun-synchronous Earth orbit in early 2006. ST5 fits into the 25-kilogram and 24-watt class of very small but fully capable spacecraft. The new technologies and design concepts for a compact power and command and data handling (C&DH) avionics system are presented. The 2-card ST5 avionics design incorporates new technology components while being tightly constrained in mass, power and volume. In order to hold down the mass and volume, and qualify new technologies for future use in space, high-efficiency triple-junction solar cells and a lithium-ion battery were baselined into the power system design. The flight computer is co-located with the power system electronics in an integral spacecraft structural enclosure called the card cage assembly. The flight computer has a full set of uplink, downlink and solid-state recording capabilities, and it implements a new CMOS Ultra-Low Power Radiation Tolerant logic technology. There were a number of challenges imposed by the ST5 mission. Specifically, designing a micro-sat class spacecraft demanded that minimizing mass, volume and power dissipation drive the overall design. The result is a very streamlined approach that strives to maintain a high level of capability. The mission's radiation requirements, along with the low-voltage DC power distribution, limited the selection of analog parts that can operate within these constraints. The challenge of qualifying new technology components for the space environment within a short development schedule was another hurdle. The mission requirements also demanded magnetic cleanliness in order to reduce the effect of stray (spacecraft-generated) magnetic fields on the science-grade magnetometer.
Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Debenedictis, Erik P.
2015-02-01
We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. Keeping power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. A performance analysis based on demonstrated ideas in physical science suggests an 80,000x improvement in cost per operation for the (arguably) general-purpose function of emulating neurons in deep learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.
A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically for large-scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed-memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
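The computational kernel being distributed is the classic k-means iteration. The sketch below shows one serial, NumPy-vectorized version of that kernel on random stand-in data; in the hybrid MPI/CUDA/OpenACC implementation each rank would hold a slice of the points and the centroid sums would be reduced across ranks:

```python
# Serial k-means iterations on random stand-in data, to show the kernel that
# a hybrid MPI/GPU implementation distributes: each rank would hold a slice
# of `points` and the centroid updates would be reduced globally.
import numpy as np

rng = np.random.default_rng(4)
points = rng.standard_normal((10000, 8))        # stand-in for observation vectors
k = 5
centroids = points[rng.choice(len(points), k, replace=False)]

for it in range(10):
    # assignment step: nearest centroid for every point
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    # update step: recompute centroids (the part reduced across ranks)
    new = np.array([points[labels == c].mean(axis=0) if np.any(labels == c)
                    else centroids[c] for c in range(k)])
    if np.allclose(new, centroids):
        break
    centroids = new

print("cluster sizes:", np.bincount(labels, minlength=k))
```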
On the production of N2O from the reaction of O(1D) with N2.
NASA Technical Reports Server (NTRS)
Simonaitis, R.; Lissi, E.; Heicklen, J.
1972-01-01
Ozone was photolyzed at 2537 Å and at 25 C in the presence of 42-115 torr of O2 and about 880 torr of N2 to test the relative importance of the two reactions O(1D) + N2 + M -> N2O + M and O(1D) + N2 -> O(3P) + N2. In this study N2O was not found as a product. Thus, from our detectability limit for N2O, an upper limit to the efficiency of the first reaction relative to the second of 2.5 x 10^-6 at 1000-torr total pressure was computed.