DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, U.A.; Baumle, B.; Kohler, P.
1992-10-01
Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.
ERIC Educational Resources Information Center
Udoh, Emmanuel E.
2010-01-01
Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…
Global computing for bioinformatics.
Loewe, Laurence
2002-12-01
Global computing, the collaboration of idle PCs via the Internet in a SETI@home style, emerges as a new way of massive parallel multiprocessing with potentially enormous CPU power. Its relations to the broader, fast-moving field of Grid computing are discussed without attempting a review of the latter. This review (i) includes a short table of milestones in global computing history, (ii) lists opportunities global computing offers for bioinformatics, (iii) describes the structure of problems well suited for such an approach, (iv) analyses the anatomy of successful projects and (v) points to existing software frameworks. Finally, an evaluation of the various costs shows that global computing indeed has merit, if the problem to be solved is already coded appropriately and a suitable global computing framework can be found. Then, either significant amounts of computing power can be recruited from the general public, or--if employed in an enterprise-wide Intranet for security reasons--idle desktop PCs can substitute for an expensive dedicated cluster.
A.J. Tepley; E.A. Thomann
2012-01-01
Recent increases in computation power have prompted enormous growth in the use of simulation models in ecological research. These models are valued for their ability to account for much of the ecological complexity found in field studies, but this ability usually comes at the cost of losing transparency into how the models work. In order to foster greater understanding...
NASA Technical Reports Server (NTRS)
Rubbert, P. E.
1978-01-01
The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools such as powerful computers optimized for fluid flow problems is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer who is concerned with solving practical engineering problems. From his viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, which is then doubled when a design capability is added. The user must be able to see, understand, and interpret the results calculated. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.
Feurzeig, Wallace
1984-01-01
The first expert instructional system, the Socratic System, was developed in 1964. One of the earliest applications of this system was in the area of differential diagnosis in clinical medicine. The power of the underlying instructional paradigm was demonstrated and the potential of the approach for valuably supplementing medical instruction was recognized. Twenty years later, despite further educationally significant advances in expert systems technology and enormous reductions in the cost of computers, expert instructional methods have found very little application in medical schools.
The information science of microbial ecology.
Hahn, Aria S; Konwar, Kishori M; Louca, Stilianos; Hanson, Niels W; Hallam, Steven J
2016-06-01
A revolution is unfolding in microbial ecology where petabytes of 'multi-omics' data are produced using next generation sequencing and mass spectrometry platforms. This cornucopia of biological information has enormous potential to reveal the hidden metabolic powers of microbial communities in natural and engineered ecosystems. However, to realize this potential, the development of new technologies and interpretative frameworks grounded in ecological design principles is needed to overcome computational and analytical bottlenecks. Here we explore the relationship between microbial ecology and information science in the era of cloud-based computation. We consider microorganisms as individual information processing units implementing a distributed metabolic algorithm and describe developments in ecoinformatics and ubiquitous computing with the potential to eliminate bottlenecks and empower knowledge creation and translation. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang
The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. As the myriad of smart sensors and meters in the power grid increases by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.
The computational challenges of Earth-system science.
O'Neill, Alan; Steenman-Clark, Lois
2002-06-15
The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.
Wang, Xiao-Jing; Krystal, John H.
2014-01-01
Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941
THE EINSTEIN@HOME SEARCH FOR RADIO PULSARS AND PSR J2007+2722 DISCOVERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, B.; Knispel, B.; Aulbert, C.
Einstein@Home aggregates the computer power of hundreds of thousands of volunteers from 193 countries to search for new neutron stars using data from electromagnetic and gravitational-wave detectors. This paper presents a detailed description of the search for new radio pulsars using Pulsar ALFA survey data from the Arecibo Observatory. The enormous computing power allows this search to cover a new region of parameter space; it can detect pulsars in binary systems with orbital periods as short as 11 minutes. We also describe the first Einstein@Home discovery, the 40.8 Hz isolated pulsar PSR J2007+2722, and provide a full timing model. PSR J2007+2722's pulse profile is remarkably wide with emission over almost the entire spin period. This neutron star is most likely a disrupted recycled pulsar, about as old as its characteristic spin-down age of 404 Myr. However, there is a small chance that it was born recently, with a low magnetic field. If so, upper limits on the X-ray flux suggest but cannot prove that PSR J2007+2722 is at least ~100 kyr old. In the future, we expect that the massive computing power provided by volunteers should enable many additional radio pulsar discoveries.
FPGA-based coprocessor for matrix algorithms implementation
NASA Astrophysics Data System (ADS)
Amira, Abbes; Bensaali, Faycal
2003-03-01
Matrix algorithms are important in many types of applications including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these, and related, applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication, which has O(N³) complexity on a sequential computer and O(N³/p) complexity on a parallel system with p processors. This paper presents an investigation into the design and implementation of different matrix algorithms such as matrix operations, matrix transforms and matrix decompositions using an FPGA-based environment. Solutions for the problem of processing large matrices have been proposed. The proposed system architectures are scalable, modular and require less area and time complexity with reduced latency when compared with existing structures.
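A minimal illustration (not from the paper, which targets FPGA hardware) of how the O(N³) work of a matrix product can be split into roughly N³/p shares by assigning each of p workers a block of rows; the function and variable names below are purely illustrative.

```python
# Illustrative row-block partitioning of C = A @ B across p workers.
# Each worker performs about N^3 / p multiply-accumulate operations.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def multiply_rows(A_block, B):
    """Multiply one horizontal block of A by the full matrix B."""
    return A_block @ B

def parallel_matmul(A, B, p=4):
    row_blocks = np.array_split(A, p, axis=0)      # p blocks of ~N/p rows each
    with ThreadPoolExecutor(max_workers=p) as pool:
        results = pool.map(multiply_rows, row_blocks, [B] * p)
    return np.vstack(list(results))

if __name__ == "__main__":
    N = 256
    A, B = np.random.rand(N, N), np.random.rand(N, N)
    assert np.allclose(parallel_matmul(A, B), A @ B)
```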
FPGA implementation of ICA algorithm for blind signal separation and adaptive noise canceling.
Kim, Chang-Min; Park, Hyung-Min; Kim, Taesu; Choi, Yoon-Kyung; Lee, Soo-Young
2003-01-01
A field programmable gate array (FPGA) implementation of the independent component analysis (ICA) algorithm is reported for blind signal separation (BSS) and adaptive noise canceling (ANC) in real time. In order to provide enormous computing power for ICA-based algorithms with multipath reverberation, a special digital processor is designed and implemented in FPGA. The chip design fully utilizes a modular concept, and several chips may be put together for complex applications with a large number of noise sources. Experimental results with a fabricated test board are reported for ANC only, BSS only, and simultaneous ANC/BSS, which demonstrate successful speech enhancement in real environments in real time.
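The abstract does not spell out the learning rule implemented on the FPGA; for orientation, here is a minimal software sketch of a standard natural-gradient (Infomax-style) ICA update for instantaneous mixtures, which is only a simplified stand-in for the real-time, reverberation-capable hardware design described above.

```python
# Minimal natural-gradient Infomax ICA for instantaneous mixtures x = A s.
# Real-time FPGA designs for reverberant mixtures use feedback filter
# structures; this batch sketch only illustrates the core unmixing update.
import numpy as np

def ica_natural_gradient(X, lr=0.01, epochs=200):
    """X: (n_sources, n_samples) zero-mean mixed signals."""
    n = X.shape[0]
    W = np.eye(n)                      # unmixing matrix estimate
    for _ in range(epochs):
        Y = W @ X                      # current source estimates
        g = np.tanh(Y)                 # score function for super-Gaussian sources
        dW = (np.eye(n) - g @ Y.T / X.shape[1]) @ W   # natural-gradient step
        W += lr * dW
    return W @ X, W                    # separated signals and unmixing matrix
```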
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
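As a purely illustrative aside (not the stochastic-variational Mumford-Shah model of the paper), the toy sketch below shows what "soft" membership means in practice: each pixel receives a probability of belonging to each of K intensity patterns, and rounding those probabilities recovers a hard segmentation.

```python
# Toy illustration of soft (fuzzy) segmentation: each pixel gets a
# probability of membership in each of K patterns, not a single label.
import numpy as np

def soft_memberships(image, means, beta=10.0):
    """image: (H, W) grayscale; means: length-K list of pattern intensities."""
    d2 = (image[..., None] - np.asarray(means)) ** 2    # (H, W, K) squared distances
    logits = -beta * d2
    logits -= logits.max(axis=-1, keepdims=True)         # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)              # (H, W, K) probabilities

# A hard segmentation is recovered by soft_memberships(...).argmax(axis=-1).
```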
Acquisition of ICU data: concepts and demands.
Imhoff, M
1992-12-01
As the issue of data overload is a problem in critical care today, it is of utmost importance to improve acquisition, storage, integration, and presentation of medical data, which appears only feasible with the help of bedside computers. The data originates from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS) and (4) manual input. All sources differ markedly in quality and quantity of data and in the demands of the interfaces between source of data and patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface that must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources are necessary in the future for additional interfacing devices such as speech recognition and 3D-GUI. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development leaving no room for solutions based on traditional concepts of personal computers.(ABSTRACT TRUNCATED AT 250 WORDS)
Computational problems and signal processing in SETI
NASA Technical Reports Server (NTRS)
Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard
1991-01-01
The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10 exp 11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status on their solution.
Sittig, D. F.; Orr, J. A.
1991-01-01
Various methods have been proposed in an attempt to solve problems in artifact and/or alarm identification including expert systems, statistical signal processing techniques, and artificial neural networks (ANN). ANNs consist of a large number of simple processing units connected by weighted links. To develop truly robust ANNs, investigators are required to train their networks on huge training data sets, requiring enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run-time from approximately 6.4 hours to 1.5 hours. We conclude that use of the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm. PMID:1807607
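A quick worked check of the reported figures, using the standard definitions of parallel speedup and efficiency; the small gap between 6.4 h / 1.5 h and the quoted maximum speedup of 4.06 presumably reflects rounding of the run times.

```python
# Parallel speedup S = T_serial / T_parallel and efficiency E = S / p.
t_serial, t_parallel, p = 6.4, 1.5, 6      # hours, hours, processors
speedup = t_serial / t_parallel            # ~4.27 from the rounded run times
efficiency = speedup / p                   # ~0.71
print(f"speedup ~ {speedup:.2f}, efficiency ~ {efficiency:.2f}")
# The paper quotes a maximum measured speedup of 4.06 (efficiency ~0.68).
```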
Monte Carlo simulation of biomolecular systems with BIOMCSIM
NASA Astrophysics Data System (ADS)
Kamberaj, H.; Helms, V.
2001-12-01
A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights and understanding of their functions. The computational complexity in Monte Carlo simulations of high density systems, with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity, is enormous. The program presented in this paper seeks to provide these desirable features putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to increase the convergence of simulations, and periodic load balancing in its parallel version, to maximally utilize the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented using an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.
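For readers unfamiliar with the method, a minimal Metropolis acceptance step for a canonical-ensemble move is sketched below; BIOMCSIM's grand-canonical moves, biasing techniques, parallel load balancing and Ewald electrostatics add machinery on top of this basic criterion.

```python
# Minimal Metropolis criterion: accept a trial move with probability
# min(1, exp(-(E_new - E_old) / (k_B * T))).
import math
import random

def metropolis_accept(e_old, e_new, kT):
    """Return True if the trial configuration should be accepted."""
    if e_new <= e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / kT)

# Example: a move raising the energy by 1.0 kT is accepted ~37% of the time.
```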
Ultra-Compact Transputer-Based Controller for High-Level, Multi-Axis Coordination
NASA Technical Reports Server (NTRS)
Zenowich, Brian; Crowell, Adam; Townsend, William T.
2013-01-01
The design of machines that rely on arrays of servomotors such as robotic arms, orbital platforms, and combinations of both, imposes a heavy computational burden to coordinate their actions to perform coherent tasks. For example, the robotic equivalent of a person tracing a straight line in space requires enormously complex kinematics calculations, and complexity increases with the number of servo nodes. A new high-level architecture for coordinated servo-machine control enables a practical, distributed transputer alternative to conventional central processor electronics. The solution is inherently scalable, dramatically reduces bulkiness and number of conductor runs throughout the machine, requires only a fraction of the power, and is designed for cooling in a vacuum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houston, Johnny L; Geter, Kerry
The Project's third year of implementation, 2007-2008, was its final year, as designated by Elizabeth City State University (ECSU), which carried out the Project in cooperation with the National Association of Mathematicians (NAM) Inc. in an effort to promote research and research training programs in computational science and scientific visualization (CSSV). A major goal of the Project was to attract energetic and productive faculty, graduate and upper division undergraduate students of diverse ethnicities to a program that investigates science and computational science issues of long-term interest to the Department of Energy (DoE) and the nation. The breadth and depth of computational science and scientific visualization and the magnitude of resources available are enormous, permitting a variety of research activities. ECSU's Computational Science-Science Visualization Center will serve as a conduit for directing users to these enormous resources.
ERIC Educational Resources Information Center
Oblinger, Diana
The Internet is an international network linking hundreds of smaller computer networks in North America, Europe, and Asia. Using the Internet, computer users can connect to a variety of computers with little effort or expense. The potential for use by college faculty is enormous. The largest problem faced by most users is understanding what such…
Computation Directorate 2008 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2009-03-25
Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.
Zimmerman, M I; Bowman, G R
2016-01-01
Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.
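The chapter is summarized here only at a high level; the toy sketch below shows the general shape of such a goal-oriented adaptive sampling loop (run short simulations, cluster the visited states, rank them by a reward balancing the trait of interest against novelty, and seed the next round from the top-ranked states). The 1-D "simulation" is a stand-in for real MD, and none of the function names come from the FAST implementation.

```python
# Toy goal-oriented adaptive sampling loop in the spirit of FAST.
import random

def run_short_md(start, steps=50):
    """Placeholder for a short MD run: a 1-D random walk from `start`."""
    x, traj = start, []
    for _ in range(steps):
        x += random.uniform(-0.1, 0.1)
        traj.append(x)
    return traj

def cluster_states(trajectories, width=0.25):
    """Placeholder clustering: bin visited values and count visits per bin."""
    counts = {}
    for traj in trajectories:
        for x in traj:
            b = round(x / width)
            counts[b] = counts.get(b, 0) + 1
    return [(b * width, c) for b, c in counts.items()]   # (state, visit count)

def adaptive_sampling(rounds=5, sims_per_round=3, alpha=1.0):
    trajectories = [run_short_md(0.0)]
    for _ in range(rounds):
        states = cluster_states(trajectories)
        # Reward = trait of interest (here, simply the coordinate itself)
        # plus a novelty bonus for rarely visited states.
        ranked = sorted(states, key=lambda sc: alpha * sc[0] + 1.0 / (1 + sc[1]),
                        reverse=True)
        seeds = [s for s, _ in ranked[:sims_per_round]]
        trajectories += [run_short_md(s) for s in seeds]
    return trajectories

print(len(adaptive_sampling()), "short trajectories generated")
```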
One loop back reaction on power law inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramo, L.R.; Woodard, R.P.
1999-08-01
We consider quantum-mechanical corrections to a homogeneous, isotropic, and spatially flat geometry whose scale factor expands classically as a general power of the comoving time. The effects of both gravitons and the scalar inflaton are computed at one loop using the manifestly causal formalism of Schwinger [J. Math. Phys. 2, 407 (1961); Particles, Sources and Fields (Addison-Wesley, Reading, MA, 1970)] with the Feynman rules recently developed by Iliopoulos et al. [Nucl. Phys. B 534, 419 (1998)]. We find no significant effect, in marked contrast to the result obtained by Mukhanov and co-workers [Phys. Rev. Lett. 78, 1624 (1998); Phys. Rev. D 56, 3248 (1997)] for chaotic inflation based on a quadratic potential. By applying the canonical technique of Mukhanov and co-workers to the exponential potentials of power law inflation, we show that the two methods produce the same results, within the approximations employed, for these backgrounds. We therefore conclude that the shape of the inflaton potential can have an enormous impact on the one loop back reaction. © 1999 The American Physical Society
Dynamic stresses in a Francis model turbine at deep part load
NASA Astrophysics Data System (ADS)
Weber, Wilhelm; von Locquenghien, Florian; Conrad, Philipp; Koutnik, Jiri
2017-04-01
A comparison of numerically obtained dynamic stresses in a Francis model turbine at deep part load with experimental results is presented. Due to the change in the electrical power mix toward a larger share of new renewable energy sources, Francis turbines are forced to operate at deep part load in order to compensate for the stochastic nature of wind and solar power and to ensure grid stability. For the extension of the operating range towards deep part load, improved understanding of the harsh flow conditions and their impact on the material fatigue of hydraulic components is required in order to ensure a long lifetime of the power unit. In this paper, pressure loads on a model turbine runner from an unsteady two-phase computational fluid dynamics simulation at deep part load are used for the calculation of mechanical stresses by finite element analysis. In this way, the stress distribution over time is determined. Since only a few runner rotations are simulated due to the enormous numerical cost, more effort has to be spent on the evaluation procedure in order to obtain objective results. By comparing the numerical results with measured strains, the accuracy of the whole simulation procedure is verified.
DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure
NASA Astrophysics Data System (ADS)
Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.
2010-03-01
Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution has not yet been explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow large-scale data sharing in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques using the computational power offered by grid technology. A prototype of DicomGrid is under evaluation and permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of the grid operation.
NASA Astrophysics Data System (ADS)
Gruska, Jozef
2012-06-01
One of the most basic tasks in quantum information processing, communication and security (QIPCC) research, theoretically deep and practically important, is to find bounds on how important inherently quantum resources really are for speeding up computations. This area of research is bringing a variety of results that imply, often in a very unexpected and counter-intuitive way, that: (a) surprisingly large classes of quantum circuits and algorithms can be efficiently simulated on classical computers; (b) the border line between quantum processes that can and cannot be efficiently simulated on classical computers is often surprisingly thin; (c) the addition of a seemingly very simple resource or tool often enormously increases the power of the available quantum tools. These discoveries have also shed new light on our understanding of quantum phenomena and quantum physics and on the potential of its inherently quantum and often mysterious-looking phenomena. The paper motivates and surveys research and its outcomes in the area of de-quantisation, and in particular presents various approaches and their outcomes concerning efficient classical simulations of various families of quantum circuits and algorithms. To motivate this area of research, some outcomes in the area of de-randomization of classical randomized computations are also presented.
"A Fair Go for All?" Australia's Language-in-Migration Policy
ERIC Educational Resources Information Center
Hoang, Ngoc T. H.; Hamid, M. Obaidul
2017-01-01
As the power of tests lies in their uses, language tests that are used to assess immigration eligibility exercise enormous power. Critical Language Testing calls for exposing the power of tests by examining the intentions of introducing tests and their effects on individuals and society, especially from the perspective of test-takers. This case…
Principles of Tablet Computing for Educators
ERIC Educational Resources Information Center
Katzan, Harry, Jr.
2015-01-01
In the study of modern technology for the 21st century, one of the most popular subjects is tablet computing. Tablet computers are now used in business, government, education, and the personal lives of practically everyone--at least, it seems that way. As of October 2013, Apple has sold 170 million iPads. The success of tablets is enormous and has…
Not an Oxymoron: Some X-ray Binary Pulsars with Enormous Spinup Rates Reveal Weak Magnetic Fields
NASA Astrophysics Data System (ADS)
Christodoulou, D. M.; Laycock, S. G. T.; Kazanas, D.
2018-05-01
Three high-mass X-ray binaries have been discovered recently exhibiting enormous spinup rates. Conventional accretion theory predicts extremely high surface dipolar magnetic fields that we believe are unphysical. Instead, we propose quite the opposite scenario: some of these pulsars exhibit weak magnetic fields, so much so that their magnetospheres are crushed by the weight of inflowing matter. The enormous spinup rate is achieved before inflowing matter reaches the pulsar's surface as the penetrating inner disk transfers its excess angular momentum to the receding magnetosphere which, in turn, applies a powerful spinup torque to the pulsar. This mechanism also works in reverse: it spins a pulsar down when the magnetosphere expands beyond corotation and finds itself rotating faster than the accretion disk, which then exerts a powerful retarding torque on the magnetic field and on the pulsar itself. The above scenarios cannot be accommodated within the context of neutron-star accretion processes occurring near spin equilibrium, thus they constitute a step toward a new theory of extreme (far from equilibrium) accretion phenomena.
The study on servo-control system in the large aperture telescope
NASA Astrophysics Data System (ADS)
Hu, Wei; Zhenchao, Zhang; Daxing, Wang
2008-08-01
Servo tracking will be one of the crucial technologies that must be solved in the research and manufacture of large and extremely large astronomical telescopes. Addressing the control requirements of such telescopes, this paper designs a servo tracking control system for a large astronomical telescope. The system is organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's operating status, while the slave computer executes the control algorithm and performs real-time control. The servo control uses a direct-drive machine and adopts DSP technology to implement the direct torque control algorithm. Such a design not only increases control system performance but also greatly reduces the volume and cost of the control system, which is significant. The design scheme is shown to be reasonable by calculation and simulation. The system can be applied to large astronomical telescopes.
Superconducting Optoelectronic Circuits for Neuromorphic Computing
NASA Astrophysics Data System (ADS)
Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo
2017-03-01
Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm² at a 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10¹¹) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.
Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P
2006-08-01
Exact surgical planning is necessary for complex operations on pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being used increasingly for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from it. A newly developed and modified, powerful raycasting-based 3D volume rendering software package (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization with an enormous gain of information, the presented system is now an established part of routine surgical planning.
Advertising to Women: Who Are We in Print and How Do We Reclaim Our Image?
ERIC Educational Resources Information Center
Levy, Jane C.
2007-01-01
In spite of the enormous power that women wield in the marketplace, the role of women in advertisements is not commensurate with women as a powerful group. Studies demonstrate a gender bias in advertising, such that, despite their buying power, women are portrayed in stereotypical roles. This manuscript discusses ways that women are depicted in…
THE POTENTIAL MID-TERM ROLE OF NUCLEAR POWER IN THE UNITED STATES: A SCENARIO ANALYSIS USING MARKAL
With all nations facing enormous challenges related to energy security, sustainability and environmental quality, nuclear power is likely to play an increasingly important role in the future. In particular, the life-cycle emissions of criteria pollutants and greenhouse gases (GHG...
Status and Trend of Automotive Power Packaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Zhenxian
2012-01-01
Comprehensive requirements in terms of cost, reliability, efficiency, form factor, weight, and volume for power electronics modules in modern electric drive vehicles have driven intensive development of automotive power packaging technology. Innovation in materials, interconnections, and processing techniques is leading to enormous improvements in power modules. In this paper, the technical development of and trends in power module packaging are evaluated by examining technical details with examples of industrial products. The issues and development directions for future automotive power module packaging are also discussed.
Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination
NASA Astrophysics Data System (ADS)
Abbas, Fayçal; Babahenini, Mohamed Chaouki
2018-06-01
Real-time global illumination of natural scenes such as forests is one of the most complex problems to solve, because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major problem that arises is visibility computation. Visibility must be computed for the set of leaves visible from the center of a given leaf; given the enormous number of leaves present in a tree, this computation, performed for each leaf of the tree, severely reduces performance. We describe a new approach that approximates visibility queries, which proceeds in two steps. The first step is to generate a point cloud representing the foliage. We assume that the point cloud is composed of two non-linearly separable classes (visible, not visible). The second step is to perform a point cloud classification by applying the Gaussian radial basis function, which measures the similarity, in terms of distance, between each leaf and a landmark leaf. This approximates the visibility queries and extracts the leaves that will be used to calculate the amount of indirect illumination exchanged between neighboring leaves. Our approach efficiently treats the light exchanges in a forest scene, allows fast computation, and produces images of good visual quality, all while taking advantage of the immense computational power of the GPU.
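A minimal sketch of the kind of kernel computation described (the parameter values, threshold and function names are illustrative; the paper's actual classifier and landmark selection may differ):

```python
# Gaussian RBF similarity between each leaf position and a landmark leaf;
# leaves with similarity above a threshold are treated as candidates for
# indirect-illumination exchange (approximate visibility).
import numpy as np

def rbf_visibility(leaf_positions, landmark, gamma=0.5, threshold=0.1):
    """leaf_positions: (N, 3) array; landmark: (3,) position of the reference leaf."""
    d2 = np.sum((leaf_positions - landmark) ** 2, axis=1)
    similarity = np.exp(-gamma * d2)          # Gaussian radial basis function
    return similarity > threshold             # boolean mask of candidate leaves
```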
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, through using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
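The abstract does not give the correlation algorithm; purely as an illustration of reporting an event position in Cartesian coordinates from multiple cameras, the sketch below intersects 2D sight lines from two cameras by least squares (the camera poses and bearings are hypothetical).

```python
# Least-squares intersection of 2D sight lines from two cameras that both
# detected the same event; an illustration, not the system's algorithm.
import numpy as np

def triangulate_2d(cam_positions, bearings_rad):
    """Each camera contributes the line n . (x - c) = 0, with n normal to its bearing."""
    A, b = [], []
    for c, theta in zip(cam_positions, bearings_rad):
        n = np.array([-np.sin(theta), np.cos(theta)])   # normal to the sight direction
        A.append(n)
        b.append(n @ np.asarray(c))
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x                                             # event position (x, y)

# Example: cameras at (0,0) and (10,0) looking at 45° and 135° meet at (5, 5).
print(triangulate_2d([(0, 0), (10, 0)], [np.pi / 4, 3 * np.pi / 4]))
```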
Customization of Discriminant Function Analysis for Prediction of Solar Flares
2005-03-01
...lives such as telecommunication, commercial airlines, electrical power, wireless services, and terrestrial weather tracking and forecasting ... the 1800's can wreak havoc on today's power, fuel, and telecommunication lines and finds its origin in solar activity. Enormous amounts of solar ... inducing potential differences across large areas of the surface. Earth-bound power, fuel, and telecommunication lines grounded to the Earth provide an ...
Word Perfect: Literacy in the Computer Age.
ERIC Educational Resources Information Center
Tuman, Myron C.
Elaborating on Emile Durkheim's claim that major debates about pedagogy are always an indicator of underlying social change, this book charts the enormous impact computers are having on how people read and write, how reading and writing are taught, and how literacy is defined. The larger concern of the book is how technology generally affects not…
ERIC Educational Resources Information Center
Bergren, Martha Dewey
2004-01-01
School nurses access an enormous amount of information through the Internet. Although most avid computer users are savvy to the threat of viruses to the integrity of data, many who surf the Web do not know that their data and the functioning of their computer is at risk to another hidden threat--spyware. This article will describe spyware, why it…
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich; Stahl, Stephanie
2008-01-01
Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally-optimal solutions for matrices up to size 30x30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally-optimal solutions, but computation…
Chien, T W; Chu, H; Hsu, W C; Tseng, T K; Hsu, C H; Chen, K Y
2003-08-01
The continuous emission monitoring system (CEMS) can monitor flue gas emissions continuously and instantaneously. However, it has the disadvantages of enormous cost, errors easily produced during sampling periods of bad weather, lagging response in variable ambient environments, and missing data during daily zero and span tests and maintenance. The concept of a predictive emission monitoring system (PEMS) is to use the operating parameters of combustion equipment, through thermodynamic or statistical methods, to construct a mathematical model that can predict emissions by a computer program. The goal of this study is to set up a PEMS in a gas-fired combined cycle power generation unit at the Hsinta station of Taiwan Power Co. The emissions to be monitored include nitrogen oxides (NOx) and oxygen (O2) in flue gas. The major variables of the predictive model were determined based on combustion theory. The data for these variables were then analyzed to establish a regression model. From the regression results, the influences of these variables are discussed and the predicted values are compared with the CEMS data for accuracy. In addition, according to the cost information, the capital and operation and maintenance costs for a PEMS can be much lower than those for a CEMS.
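The abstract identifies the approach (a regression model from combustion operating parameters to the monitored emissions) without giving its exact form; a minimal least-squares sketch is shown below, with the predictor variables named purely for illustration.

```python
# Minimal PEMS-style multiple linear regression: predict NOx from unit
# operating parameters (the example predictor names are illustrative only).
import numpy as np

def fit_linear_pems(X, y):
    """X: (n_samples, n_params) operating data; y: (n_samples,) measured NOx."""
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])        # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict(coef, X):
    return np.hstack([np.ones((X.shape[0], 1)), X]) @ coef

# e.g. X columns: fuel flow, gas turbine load, ambient temperature, excess air.
```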
Holographic imaging and photostimulation of neural activity.
Yang, Weijian; Yuste, Rafael
2018-06-01
Optical imaging methods are powerful tools in neuroscience as they can systematically monitor the activity of neuronal populations with high spatiotemporal resolution using calcium or voltage indicators. Moreover, caged compounds and optogenetic actuators enable optical manipulation of neural activity. Among optical methods, computer-generated holography offers enormous flexibility to sculpt the excitation light in three dimensions (3D), particularly when combined with two-photon light sources. By projecting holographic light patterns on the sample, the activity of multiple neurons across a 3D brain volume can be simultaneously imaged or optically manipulated with single-cell precision. This flexibility makes two-photon holographic microscopy an ideal all-optical platform to simultaneously read and write activity in neuronal populations in vivo in 3D, a critical ability to dissect the function of neural circuits. Copyright © 2018 Elsevier Ltd. All rights reserved.
Machine Phase Fullerene Nanotechnology: 1996
NASA Technical Reports Server (NTRS)
Globus, Al; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
NASA has used exotic materials for spacecraft and experimental aircraft to good effect for many decades. In spite of many advances, transportation to space still costs about $10,000 per pound. Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. These studies and others suggest enormous potential for aerospace systems. Unfortunately, methods to realize diamondoid nanotechnology are at best highly speculative. Recent computational efforts at NASA Ames Research Center, and computation and experiment elsewhere, suggest that a nanotechnology of machine phase functionalized fullerenes may be synthetically relatively accessible and of great aerospace interest. Machine phase materials are (hypothetical) materials consisting entirely or in large part of microscopic machines. In a sense, most living matter fits this definition. To begin investigation of fullerene nanotechnology, we used molecular dynamics to study the properties of carbon nanotube based gears and gear/shaft configurations. Experiments on C60 and quantum calculations suggest that benzyne may react with carbon nanotubes to form gear teeth. Han has computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Results suggest that rotation can be converted to rotating or linear motion, and linear motion may be converted into rotation. Preliminary results suggest that these mechanical systems can be cooled by a helium atmosphere. Furthermore, Deepak has successfully simulated the use of helical electric fields generated by a laser to power fullerene gears once a positive and a negative charge have been added to form a dipole. Even with mechanical motion, cooling, and power, creating a viable nanotechnology requires support structures, computer control, a system architecture, a variety of components, and some approach to manufacture. Additional information is contained within the original extended abstract.
Enormous knowledge base of disease diagnosis criteria.
Xiao, Z H; Xiao, Y H; Pei, J H
1995-01-01
One of the problems in the development of medical knowledge systems is the limitation of a system's knowledge. It is a common expectation to increase the number of diseases contained in a system. Using a high-density knowledge representation method designed by us, we have developed the Enormous Knowledge Base of Disease Diagnosis Criteria (EKBDDC). It contains diagnostic criteria for 1,001 diagnostic entities and describes nearly 4,000 items of diagnostic indicators. It is the core of a huge medical project--the Electronic-Brain Medical Erudite (EBME). This enormous knowledge base was implemented initially on a low-cost popular microcomputer, and it can aid in prompting for typical diseases and in the teaching of diagnosis. The knowledge base is easy to expand. One of the main goals of EKBDDC is to increase the number of diseases included in it as far as possible using a low-cost computer with a comparatively small storage capacity. For this, we have designed a high-density knowledge representation method. Criteria of the various diagnostic entities are stored in separate records of the knowledge base. Each diagnostic entity corresponds to a diagnostic criterion data set; each data set consists of several diagnostic criterion data values (Table 1); each value is composed of two parts, an integer and a decimal; the integer part is the coding number of the given diagnostic information, and the decimal part is the diagnostic value of this information for the disease indicated by the corresponding record number. For example, 75.02: the integer 75 is the coding number of "hemorrhagic skin rash"; the decimal 0.02 is the diagnostic value of this manifestation for diagnosing allergic purpura. TABULAR DATA, SEE PUBLISHED ABSTRACT. The algebraic sum method, a special form of weighted summation, is adopted as the mathematical model. In EKBDDC, the diagnostic values, which represent the significance of the disease manifestations for diagnosing the corresponding diseases, were determined empirically. It is of great economic, practical, and technical significance to realize enormous knowledge bases of disease diagnosis criteria on a low-cost popular microcomputer. This helps developing countries popularize medical informatics. To create an enormous international computer-aided diagnosis system, unified modules of disease diagnosis criteria could be jointly developed and used to "inlay" relevant computer-aided diagnosis systems. It is just like assembling a house using prefabricated panels.
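A small sketch of the representation described above: each stored number packs the manifestation's code into its integer part and its diagnostic weight into its decimal part, and a disease score is the algebraic sum of the weights of the manifestations actually present. The 75.02 example comes from the text; everything else is illustrative.

```python
# Decode packed diagnostic-criterion values (integer part = manifestation code,
# decimal part = diagnostic weight) and score a disease by algebraic summation.

def decode(value):
    code = int(value)
    weight = round(value - code, 4)        # e.g. 75.02 -> code 75, weight 0.02
    return code, weight

def score_disease(criterion_values, present_codes):
    """Sum the weights of the manifestations that the patient actually shows."""
    total = 0.0
    for v in criterion_values:
        code, weight = decode(v)
        if code in present_codes:
            total += weight
    return total

# From the text: 75.02 encodes "hemorrhagic skin rash" (code 75) with weight
# 0.02 toward the diagnosis of allergic purpura.
print(decode(75.02))    # (75, 0.02)
```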
NASA Astrophysics Data System (ADS)
Sporea, R. A.; Trainor, M. J.; Young, N. D.; Shannon, J. M.; Silva, S. R. P.
2014-03-01
Ultra-large-scale integrated (ULSI) circuits have benefited from successive refinements in device architecture for enormous improvements in speed, power efficiency and areal density. In large-area electronics (LAE), however, the basic building block, the thin-film field-effect transistor (TFT), has largely remained static. Now, a device concept with fundamentally different operation, the source-gated transistor (SGT), opens the possibility of unprecedented functionality in future low-cost LAE. With its simple structure and operational characteristics of low saturation voltage, stability under electrical stress and large intrinsic gain, the SGT is ideally suited for LAE analog applications. Here, we show using measurements on polysilicon devices that these characteristics lead to substantial improvements in gain, noise margin, power-delay product and overall circuit robustness in digital SGT-based designs. These findings have far-reaching consequences, as LAE will form the technological basis for a variety of future developments in the biomedical, civil engineering, remote sensing and artificial skin areas, as well as wearable and ubiquitous computing, or lightweight applications for space exploration.
Sporea, R. A.; Trainor, M. J.; Young, N. D.; Shannon, J. M.; Silva, S. R. P.
2014-01-01
Ultra-large-scale integrated (ULSI) circuits have benefited from successive refinements in device architecture for enormous improvements in speed, power efficiency and areal density. In large-area electronics (LAE), however, the basic building block, the thin-film field-effect transistor (TFT), has largely remained static. Now, a device concept with fundamentally different operation, the source-gated transistor (SGT), opens the possibility of unprecedented functionality in future low-cost LAE. With its simple structure and operational characteristics of low saturation voltage, stability under electrical stress and large intrinsic gain, the SGT is ideally suited for LAE analog applications. Here, we show using measurements on polysilicon devices that these characteristics lead to substantial improvements in gain, noise margin, power-delay product and overall circuit robustness in digital SGT-based designs. These findings have far-reaching consequences, as LAE will form the technological basis for a variety of future developments in the biomedical, civil engineering, remote sensing and artificial skin areas, as well as wearable and ubiquitous computing, or lightweight applications for space exploration. PMID:24599023
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or a coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lincoln, Don
The LHC is the world's highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab's Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.
PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
NASA Astrophysics Data System (ADS)
Barreiro Megino, Fernando; Caballero Bejar, Jose; De, Kaushik; Hover, John; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Petrosyan, Artem; Wenaus, Torre
2016-02-01
After a scheduled maintenance and upgrade period, the world's largest and most powerful machine - the Large Hadron Collider (LHC) - is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies. The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily on up to 200 thousand simultaneous cores (limited by the available resources for ATLAS), with up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.
Bergren, Martha Dewey
2004-10-01
School nurses access an enormous amount of information through the Internet. Although most avid computer users are savvy to the threat of viruses to the integrity of data, many who surf the Web do not know that their data and the functioning of their computer is at risk to another hidden threat--spyware. This article will describe spyware, why it is a problem, how it is transmitted to a personal or business computer, how to prevent spyware infestation, and how to delete it.
Lincoln, Don
2018-01-16
The LHC is the world's highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab's Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.
The origins of computer weather prediction and climate modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Peter
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
The origins of computer weather prediction and climate modeling
NASA Astrophysics Data System (ADS)
Lynch, Peter
2008-03-01
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
Biomimetics: using nature as an inspiring model for human innovation
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph
2006-01-01
The evolution of nature over 3.8 billion years has led to highly effective and power-efficient biological mechanisms. Imitating these mechanisms offers enormous potential for improving our lives and the tools we use.
Progress in protein crystallography.
Dauter, Zbigniew; Wlodawer, Alexander
2016-01-01
Macromolecular crystallography has evolved enormously from its pioneering days, when structures were solved by "wizards" performing all of the complicated procedures almost by hand. Today, crystal structures of large systems can often be solved very effectively by powerful automatic programs in days or hours, or even minutes. Such progress is to a large extent coupled to advances in many other fields, such as genetic engineering, computer technology, the availability of synchrotron beam lines and many other techniques, creating the highly interdisciplinary science of macromolecular crystallography. Owing to this unprecedented success, crystallography is often treated as just another analytical method and practiced by researchers who are interested in the structures of macromolecules but not highly trained in the procedures involved in structure determination. One should therefore take into account that the contemporary, highly automatic systems can produce results almost without human intervention, but the resulting structures must be carefully checked and validated before their release into the public domain.
Computer Security: Improvements Needed to Reduce Risk to Critical Federal Operations and Assets
2001-11-09
COMPUTER SECURITY: Improvements Needed to Reduce Risk to Critical Federal Operations and Assets. Statement of Robert F. Dacey, Director, Information... The benefits have been enormous. Vast amounts of information are now literally at our fingertips, facilitating research on virtually every topic
NASA Astrophysics Data System (ADS)
Yokomizu, Yasunobu
Dispersed generation systems, such as micro gas turbines and fuel cells, have been installed at some commercial facilities. Smaller dispersed generators, such as solar photovoltaics, have also been installed at a number of individual homes. The trend toward introducing these generation systems seems likely to continue, leading to power systems that contain an enormous number of dispersed generators. The present report discusses such near-future power distribution systems.
Certification of computer professionals: A good idea?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggess, G.
1994-12-31
In the early stages of computing there was little understanding of, or attention paid to, the ethical responsibilities of professionals. Companies routinely put secretaries and music majors through 30 hours of video training and turned them loose on data processing projects. As the nature of the computing task changed, these same practices were followed and the trainees were set loose on life-critical software development projects. The enormous risk of using programmers with limited training has been documented by the GAO report on the BSY-2 program.
A Fast Algorithm for Massively Parallel, Long-Term, Simulation of Complex Molecular Dynamics Systems
NASA Technical Reports Server (NTRS)
Jaramillo-Botero, Andres; Goddard, William A, III; Fijany, Amir
1997-01-01
The advances in theory and computing technology over the last decade have led to enormous progress in applying atomistic molecular dynamics (MD) methods to the characterization, prediction, and design of chemical, biological, and materials systems.
Hadoop-MCC: Efficient Multiple Compound Comparison Algorithm Using Hadoop.
Hua, Guan-Jie; Hung, Che-Lun; Tang, Chuan Yi
2018-01-01
In the past decade, drug design technologies have improved enormously. Computer-aided drug design (CADD) has played an important role in analysis and prediction during drug development, making the procedure more economical and efficient. However, computation with big data, such as ZINC, which contains more than 60 million compounds, and GDB-13, with more than 930 million small molecules, poses a notable time-consumption problem. Therefore, we propose a novel heterogeneous high-performance computing method, named Hadoop-MCC, integrating Hadoop and GPU to cope with big chemical structure data efficiently. Hadoop-MCC gains high availability and fault tolerance from Hadoop, which is used to scatter input data to GPU devices and gather the results from them. The Hadoop framework adopts a mapper/reducer computation model. In the proposed method, mappers are responsible for fetching SMILES data segments and performing the LINGO method on the GPU, and reducers then collect all comparison results produced by the mappers. Because of Hadoop's high availability, all LINGO computational jobs on the mappers can be completed even if some of the mappers encounter problems. LINGO comparisons are performed on each GPU device in parallel. According to the experimental results, the proposed method on multiple GPU devices achieves better computational performance than CUDA-MCC on a single GPU device. Hadoop-MCC achieves the scalability, high availability, and fault tolerance granted by Hadoop, as well as high performance, by integrating the computational power of both Hadoop and the GPU. It has been shown that a heterogeneous architecture such as Hadoop-MCC can deliver better computational performance than a single GPU device. Copyright© Bentham Science Publishers.
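As a rough illustration of the comparison step being distributed here, the sketch below implements the commonly described LINGO similarity (Tanimoto overlap of length-4 SMILES substrings) in plain Python and mimics the mapper/reducer flow on toy data. The q-gram length, the Tanimoto form, and the toy compounds are assumptions, not code from Hadoop-MCC, which runs this step on GPUs under Hadoop:

```python
# Minimal CPU-only sketch of LINGO similarity plus a mapper/reducer-style flow.
from collections import Counter

def lingos(smiles: str, q: int = 4) -> Counter:
    """Multiset of overlapping q-character substrings of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(smiles_a: str, smiles_b: str) -> float:
    """Tanimoto similarity over the two LINGO multisets."""
    a, b = lingos(smiles_a), lingos(smiles_b)
    shared = sum(min(a[k], b[k]) for k in a.keys() & b.keys())
    total = sum(a.values()) + sum(b.values()) - shared
    return shared / total if total else 0.0

# Mapper/reducer flow in miniature: each "mapper" scores one query segment
# against the library; the "reducer" gathers all results.
library = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]   # toy compounds (assumed)
query_segments = [["CCO", "CCN"], ["c1ccccc1"]]

results = []                                   # the reducer's collection
for segment in query_segments:                 # one mapper per segment
    results.extend((q, t, lingo_tanimoto(q, t)) for q in segment for t in library)

for q, t, s in results:
    print(f"{q:>24s} vs {t:<24s} {s:.3f}")
```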
CONFOCAL MICROSCOPY SYSTEM PERFORMANCE: SPECTROSCOPY AND FOUNDATIONS FOR QUANTITATION
The confocal laser-scanning microscope (CLSM) has enormous potential in many biological fields. The reliability of the CLSM for obtaining specific measurements and quantifying fluorescence data depends on using a correctly aligned instrument with stable laser power. For man...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenks, Jeromy WJ; TeGrotenhuis, Ward E.; Motkuri, Radha K.
2015-07-09
Metal-organic frameworks (MOFs) have attracted enormous interest over the past few years due to their potential applications in energy storage and gas separation. However, there have been few reports on MOFs for adsorption cooling applications. Adsorption cooling technology is an established alternative to mechanical vapor compression refrigeration systems and is an excellent option in industrial environments where waste heat is available. Applications also include hybrid systems, refrigeration, power-plant dry cooling, cryogenics, vehicular systems, and building HVAC. Adsorption-based cooling and refrigeration systems have several advantages, including few moving parts and negligible power consumption. Key disadvantages include large thermal mass, bulkiness, complex controls, and low COP (0.2-0.5). We explored the use of metal-organic frameworks that have very high mass loading and relatively low heats of adsorption, with certain combinations of refrigerants, to demonstrate a new type of highly efficient adsorption chiller. An adsorption chiller based on MOFs suggests that a thermally driven COP>1 may be possible with these materials, which would represent a fundamental breakthrough in the performance of adsorption chiller technology. Computational fluid dynamics combined with a system-level lumped-parameter model have been used to project size and performance for chillers with a cooling capacity ranging from a few kW to several thousand kW. In addition, a cost model has been developed to project the manufactured cost of entire systems. These systems rely on stacked micro/mini-scale architectures to enhance heat and mass transfer. Presented herein are computational and experimental results for hydrophilic MOFs, fluorophilic MOFs, and fluorophilic covalent organic frameworks (COFs).
NASA Astrophysics Data System (ADS)
Landsfeld, M. F.; Daudert, B.; Friedrichs, M.; Morton, C.; Hegewisch, K.; Husak, G. J.; Funk, C. C.; Peterson, P.; Huntington, J. L.; Abatzoglou, J. T.; Verdin, J. P.; Williams, E. L.
2015-12-01
The Famine Early Warning Systems Network (FEWS NET) focuses on food insecurity in developing nations and provides objective, evidence based analysis to help government decision-makers and relief agencies plan for and respond to humanitarian emergencies. The Google Earth Engine (GEE) is a platform provided by Google Inc. to support scientific research and analysis of environmental data in their cloud environment. The intent is to allow scientists and independent researchers to mine massive collections of environmental data and leverage Google's vast computational resources to detect changes and monitor the Earth's surface and climate. GEE hosts an enormous amount of satellite imagery and climate archives, one of which is the Climate Hazards Group Infrared Precipitation with Stations dataset (CHIRPS). The CHIRPS dataset is land-based, quasi-global (50N-50S), at 0.05-degree resolution, and has a relatively long period of record (1981-present). CHIRPS is fed into the GEE continuously as new data fields are generated each month. This precipitation dataset is a key input for FEWS NET monitoring and forecasting efforts. FEWS NET intends to leverage the GEE in order to provide analysts and scientists with flexible, interactive tools to aid in their monitoring and research efforts. These scientists often work in bandwidth-limited regions, so lightweight Internet tools and services that bypass the need to download massive datasets for analysis are preferred for their work. The GEE provides just this type of service. We present a tool designed specifically for FEWS NET scientists to be used interactively to investigate and monitor agro-climatological conditions. We are able to utilize the enormous GEE computing power to generate on-the-fly statistics to calculate precipitation anomalies, z-scores, percentiles and band ratios, and allow the user to interactively select custom areas for statistical time series comparisons and predictions.
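The statistics the tool computes on the fly are standard per-pixel comparisons of a current value against the CHIRPS climatology. A minimal NumPy sketch on a synthetic monthly series (the actual tool evaluates these server-side in Google Earth Engine) might look like the following:

```python
# Per-pixel anomaly, z-score, percentile, and ratio against a climatology,
# illustrated on a synthetic monthly precipitation series (values assumed).
import numpy as np

rng = np.random.default_rng(0)
monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=35)   # synthetic series, one value per year
current = monthly_precip[-1]                                 # most recent month
climatology = monthly_precip[:-1]                            # historical record

anomaly = current - climatology.mean()
z_score = (current - climatology.mean()) / climatology.std(ddof=1)
percentile = (climatology < current).mean() * 100.0
ratio = current / climatology.mean()                         # band-ratio style comparison

print(f"anomaly={anomaly:+.1f} mm, z={z_score:+.2f}, "
      f"percentile={percentile:.0f}%, ratio={ratio:.2f}")
```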
Data Center Consolidation: A Step towards Infrastructure Clouds
NASA Astrophysics Data System (ADS)
Winter, Markus
Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.
Modeling of microstructure evolution in direct metal laser sintering: A phase field approach
NASA Astrophysics Data System (ADS)
Nandy, Jyotirmoy; Sarangi, Hrushikesh; Sahoo, Seshadev
2017-02-01
Direct Metal Laser Sintering (DMLS) is a new additive manufacturing technology that builds metal parts layer by layer directly from the powder bed. The process occurs within a very short time period with a rapid solidification rate. Slight variations in the process parameters may cause enormous changes in the final built parts, whose physical and mechanical properties depend on the solidification rate, which in turn directly affects the microstructure of the material. Thus, the evolution of the microstructure plays a vital role in process-parameter optimization. Nowadays, the increase in computational power allows direct simulation of microstructures during materials processing for specific manufacturing conditions. In this study, the microstructure evolution of Al-Si-10Mg powder in the DMLS process was modeled using a phase field approach. A MATLAB code was developed to solve the set of phase field equations, with simulation parameters including temperature gradient, laser scan speed and laser power. The effects of the temperature gradient on microstructure evolution were studied, and it was found that as the temperature gradient increases, the dendritic tip grows at a faster rate.
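For readers unfamiliar with the method, the sketch below shows the kind of update such a phase field code iterates: an explicit Allen-Cahn-type step on a 2D grid. It is a generic, nondimensional illustration only; the coupled thermal and solute terms of the actual Al-Si-10Mg model are omitted, and all parameter values are assumptions:

```python
# Generic explicit Allen-Cahn phase-field step (nondimensional units, dx = 1).
import numpy as np

n, dt = 128, 0.1                     # grid points per side, time step (assumed)
eps2, w, m = 1.0, 1.0, 1.0           # gradient-energy coeff., barrier height, mobility (assumed)

phi = np.zeros((n, n))
phi[n // 2 - 4:n // 2 + 4, n // 2 - 4:n // 2 + 4] = 1.0   # small solid seed

def laplacian(f):
    """Five-point Laplacian with periodic boundaries (grid spacing = 1)."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

for _ in range(500):
    dfdphi = 2.0 * w * phi * (1.0 - phi) * (1.0 - 2.0 * phi)   # double-well derivative
    phi += dt * m * (eps2 * laplacian(phi) - dfdphi)           # explicit Euler step

print("solid fraction:", float(phi.mean()))
```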
Gallium arsenide processing elements for motion estimation full-search algorithm
NASA Astrophysics Data System (ADS)
Lopez, Jose F.; Cortes, P.; Lopez, S.; Sarmiento, Roberto
2001-11-01
The block-matching motion estimation algorithm (BMA) is the most popular method for motion-compensated coding of image sequences. Among the several possible search strategies for this algorithm, the full-search BMA (FBMA) has attracted great interest from the scientific community due to its regularity, optimal solution and low control overhead, which simplify its VLSI realization. Its main drawback, on the other hand, is its demand for an enormous amount of computation. There are different ways of addressing this drawback; the one adopted in this article is the use of an advanced technology, gallium arsenide (GaAs), together with techniques to reduce area overhead. By exploiting the properties of GaAs, improvements can be obtained in the implementation of feasible systems for real-time video compression architectures. Different primitives used in the implementation of processing elements (PEs) for an FBMA scheme are presented. As a result, PEs running at 270 MHz have been developed in order to study their functionality and performance. From these results, an implementation for MPEG applications is proposed, leading to an architecture running at 145 MHz with a power dissipation of 3.48 W and an area of 11.5 mm2.
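The computational burden comes from the exhaustive search itself, which is easy to state in software even though the paper implements it in GaAs hardware. A plain NumPy sketch of one full-search block match using the sum of absolute differences (SAD), with assumed block and window sizes, is shown below:

```python
# Full-search block matching: test every candidate displacement in the search
# window and keep the one with minimum SAD. Block/window sizes are assumptions.
import numpy as np

def full_search(cur_block, ref_frame, top, left, search_range=7):
    """Return the motion vector (dy, dx) minimizing SAD over the search window."""
    n = cur_block.shape[0]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue                       # candidate falls outside the frame
            cand = ref_frame[y:y + n, x:x + n]
            sad = np.abs(cur_block.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))        # current frame = shifted reference
mv, sad = full_search(cur[16:32, 16:32], ref, top=16, left=16)
print("estimated motion vector:", mv, "SAD:", sad)    # recovers the known displacement
```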
NASA Astrophysics Data System (ADS)
Cataldo, Franca
The world is at the dawn of a third industrial revolution, the digital revolution, that brings great changes the world over. Today, computing devices, the Internet, and the World Wide Web are vital technology tools that affect every aspect of everyday life and success. While computing technologies offer enormous benefits, there are equally enormous safety and security risks that have been growing exponentially since these technologies became widely available to the public in 1994. Cybercriminals are increasingly launching sophisticated and serious attacks and breaches against our nation's government, financial institutions, organizations, communities, and private citizens. There is a great need for computer scientists to carry America's innovation and economic growth forward and for cybersecurity professionals to keep our nation safe from criminal hacking. In this digital age, computer science and cybersecurity are essential foundational ingredients of technological innovation, economic growth, and security across all industries. Yet America's K-12 education institutions are not teaching the computer science and cybersecurity skills required to produce a technologically savvy 21st-century workforce. Education is the key to preparing students to enter the workforce and, therefore, American K-12 STEM education must be reformed to accommodate the teaching required in the digital age. Keywords: Cybersecurity Education, Cybersecurity Education Initiatives, Computer Science Education, Computer Science Education Initiatives, 21st Century K-12 STEM Education Reform, 21st Century Digital Literacies, High-Tech Innovative Problem-Solving Skills, 21st Century Digital Workforce, Standardized Testing, Foreign Language and Culture Studies, Utica College, Professor Chris Riddell.
Comparison of Spatiotemporal Mapping Techniques for Enormous ETL and Exploitation Patterns
NASA Astrophysics Data System (ADS)
Deiotte, R.; La Valley, R.
2017-10-01
The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space-filling curve, a relative of Peano's 1890 space-filling curve, for spatial hashing, and interleave temporal hashes to generate a spatiotemporal encoding. However, other space-filling curves exist that provide different manifold coverings, which could support better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index, but other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores several such methods using data sets ranging in size from 1K to 10M observations and provides a comparison of the methods.
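For readers unfamiliar with the baseline being compared against, the sketch below shows a Morton/Z-order style spatiotemporal key in Python: latitude, longitude, and a timestamp are quantized and their bits interleaved. The bit widths and quantization are illustrative assumptions, not the exact Niemeyer or Usher encodings:

```python
# Z-order (Morton) style spatiotemporal key: quantize lat/lon/time, then
# interleave their bits round-robin. Bit widths below are assumptions.
def interleave_bits(values, bits):
    """Interleave the low `bits` bits of each integer in `values` (round-robin)."""
    code = 0
    for b in range(bits):
        for i, v in enumerate(values):
            code |= ((v >> b) & 1) << (b * len(values) + i)
    return code

def spatiotemporal_key(lat, lon, t_seconds, bits=21):
    y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))    # quantized latitude
    x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))   # quantized longitude
    t = t_seconds & ((1 << bits) - 1)                    # truncated time component
    return interleave_bits([y, x, t], bits)

print(hex(spatiotemporal_key(40.7128, -74.0060, 1_500_000_000)))
```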
Proceedings of the 1992 Antenna Applications Symposium
1993-06-01
consists of two optically fed 1x4 subarrays with MMIC-based active T/R modules. Custom-designed fiber optic links have been employed to provide... factor 1/100,000. However, despite this enormous reduction in power, the peak amplitude is still more than 30 times larger than the rms noise. Based on... only passive components and receives inputs from a central high-power transmitter and also sends outputs to a low-noise receiver. Active ESAs, called
Solving optimization problems on computational grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S. J.; Mathematics and Computer Science
2001-05-01
Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform, known as a computational grid (alternatively, metacomputer), has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, was among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure needed to solve these problems on computational grids. This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to the development of the runtime support library MW for implementing algorithms with a master-worker control structure on Condor platforms. This work is discussed here, along with work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programming. Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of 'practical' complexity and of the role of heuristics.
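The master-worker control structure mentioned above is simple to illustrate in miniature. In the toy Python sketch below, multiprocessing stands in for the Condor-based MW runtime, each work unit evaluates an assumed objective at one starting point, and the master collects the best result; a real grid run would dispatch far heavier subproblems (e.g., branch-and-bound nodes):

```python
# Toy master-worker pattern; Python's multiprocessing Pool plays the role of
# the worker pool, the main process plays the master.
from multiprocessing import Pool

def objective(x):
    # Placeholder objective for illustration only (assumed, not from the article).
    return (x - 3.7) ** 2 + 1.0

def worker(x):
    # One independent work unit: evaluate the objective at a starting point.
    return x, objective(x)

if __name__ == "__main__":
    work_units = [i * 0.1 for i in range(100)]         # the master's task list
    with Pool(processes=4) as pool:                    # the "workers"
        results = pool.map(worker, work_units)         # master gathers results
    best_x, best_val = min(results, key=lambda r: r[1])
    print(f"best x = {best_x:.1f}, objective = {best_val:.3f}")
```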
Defining Human-Centered System Issues for Verifying and Validating Air Traffic Control Systems
DOT National Transportation Integrated Search
1993-01-01
Over the past 40 years, the application of automation to the U.S. air traffic control (ATC) system has grown enormously to meet significant increases in air traffic volume. The next ten years will witness a dramatic overhaul of computer hardwar...
Grove Medal Address - investing in the fuel cell business
NASA Astrophysics Data System (ADS)
Rasul, Firoz
Successful commercialization of fuel cells will require significant investment. To attract this funding, the objective must be commercially driven, and the financing will have to be viewed as an investment in the business of fuel cells rather than just the funding of technology development. With the recent advancements in fuel cells and demonstrations of fuel cell power systems in stationary and transport applications, an industry has begun to emerge, and it is attracting the attention of institutional and corporate investors in addition to traditional government funding. Although the strategic importance of fuel cells as a versatile, efficient and cleaner power source of the future, as well as an `engine' for economic growth and job creation, has now been understood by several governments, major corporations have only just begun to recognize the enormous potential for the fuel cell to become as ubiquitous for electrical power as the microprocessor has become for computing power. Viewed as a business, fuel cells must meet the commercial requirements of price competitiveness, productivity enhancement, performance and reliability, in addition to environmental friendliness. As fuel cell-based products exhibit commercial advantages over conventional power sources, the potential for higher profits and superior returns will attract the magnitude of investment needed to finance the development of products for the varied applications, the establishment of high-volume manufacturing capabilities, and the creation of appropriate fuel and service infrastructures for these new products based on a revolutionary technology. Today, the fuel cell industry is well-positioned to offer the investing public opportunities to reap substantial returns through their participation at this early stage of growth of the industry.
Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing
NASA Astrophysics Data System (ADS)
Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.
2014-12-01
After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has become enormous with the progress of high-accuracy observation networks, and many parameters (e.g., positional information, origin time, magnitude, etc.) must be handled to display seismic information efficiently. High-speed processing of data and image information is therefore necessary to handle enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study, a movement called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing now provides a high-performance computing environment at a lower cost than before. Moreover, GPUs offer an advantage for visualizing the processed data, because they were originally designed for graphics processing. In GPU computing, the processed data are always stored in video memory, so drawing information can be written directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfers, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and the feasibility of a high-speed visualization system for hypocenter data.
Overview on the high power excimer laser technology
NASA Astrophysics Data System (ADS)
Liu, Jingru
2013-05-01
High-power excimer lasers have essential applications in the fields of high energy density physics, inertial fusion energy and industry, owing to advantages such as short wavelength, high gain, wide bandwidth, energy scalability and repetitive operating capability. This overview introduces and evaluates the enormous endeavors of the international high-power excimer laser community over the last 30 years. The main technologies of high-power excimer lasers are reviewed, including pumping source technology, angular multiplexing and pulse compression, beam smoothing and homogeneous irradiation, and high-efficiency, repetitive operation, among others. A high-power XeCl laser system developed at NINT in China is described in detail.
A New Deal for Higher Education?
ERIC Educational Resources Information Center
Roach, Ronald
2009-01-01
Almost a year after assuming leadership, the administration of Barack Obama has taken on enormous tasks in confronting the recession that has gripped the U.S. economy. Americans have watched tentatively as the president has extended federal powers into handling corporate bailouts, overseeing the rehabilitation of banks and General Motors, laying…
Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications
ERIC Educational Resources Information Center
Jung, Gueyoung
2010-01-01
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…
In the Face of Fallible AWE Feedback: How Do Students Respond?
ERIC Educational Resources Information Center
Bai, Lifang; Hu, Guangwei
2017-01-01
Automated writing evaluation (AWE) systems can provide immediate computer-generated quantitative assessments and qualitative diagnostic feedback on an enormous number of submitted essays. However, limited research attention has been paid to locally designed AWE systems used in English as a foreign language (EFL) classroom contexts. This study…
Network-Centric Data Mining for Medical Applications
ERIC Educational Resources Information Center
Davis, Darcy A.
2012-01-01
Faced with unsustainable costs and enormous amounts of under-utilized data, health care needs more efficient practices, research, and tools to harness the benefits of data. These methods create a feedback loop where computational tools guide and facilitate research, leading to improved biological knowledge and clinical standards, which will in…
The National Center for Computational Toxicology (NCCT) at the US Environmental Protection Agency has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences. This includes high-throughput in vitro screening data, legacy in vivo...
The National Center for Computational Toxicology (NCCT) has assembled and delivered an enormous quantity and diversity of data for the environmental sciences through the CompTox Chemistry Dashboard. These data include high-throughput in vitro screening data, in vivo and functiona...
EPA RE-Powering Mapper Feasibility Studies
The U.S. Environmental Protection Agency (EPA) Office of Land and Emergency Management (OLEM) Office of Communications, Partnerships and Analysis (OCPA) initiated the RE-Powering America's Land Initiative to demonstrate the enormous potential that contaminated lands, landfills, and mine sites provide for developing renewable energy in the United States. As part of the RE-Powering America's Land Initiative, the EPA and the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) evaluated the feasibility of developing renewable energy production on Superfund, brownfields, and former landfill or mining sites. These reports pair EPA's expertise on contaminated sites with the renewable energy expertise of NREL.
MPEG-4-based 2D facial animation for mobile devices
NASA Astrophysics Data System (ADS)
Riegel, Thomas B.
2005-03-01
The enormous spread of mobile computing devices (e.g., PDAs, cellular phones, palmtops, etc.) emphasizes scalable applications, since users like to run their favorite programs on whatever terminal they are operating at the moment. Applications that can be adapted to the hardware at hand without losing much of their functionality are therefore of interest. A good example of this is "Facial Animation," which offers an interesting way to achieve such scalability. By employing MPEG-4, which provides its own profile for facial animation, a solution for low-power terminals, including mobile phones, is demonstrated. From the generic 3D MPEG-4 face, a specific 2D head model is derived, which consists primarily of a portrait image superposed with a suitable warping mesh and adapted 2D animation rules. Thus the MPEG-4 animation process need not be changed, and standard-compliant facial animation parameters can be used to displace the vertices of the mesh and warp the underlying image accordingly.
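A minimal sketch of this 2D idea is shown below: a portrait image is overlaid with a coarse vertex mesh, a hypothetical "jaw-drop" style parameter displaces a few vertices, and a piecewise-affine warp deforms the image accordingly. The mesh layout, the parameter mapping, and the use of scikit-image are illustrative assumptions, not the MPEG-4 face model or the paper's implementation:

```python
# Mesh-driven 2D image warping sketch (scikit-image assumed available).
import numpy as np
from skimage import data
from skimage.transform import PiecewiseAffineTransform, warp

image = data.astronaut()[:, :, 0]                     # stand-in portrait (grayscale)
rows, cols = image.shape

# Regular control mesh over the portrait, as (x, y) vertex positions.
grid_y, grid_x = np.meshgrid(np.linspace(0, rows - 1, 8),
                             np.linspace(0, cols - 1, 8), indexing="ij")
src = np.dstack([grid_x.ravel(), grid_y.ravel()])[0]

# Hypothetical "animation rule": push the lower-central vertices downward,
# as a jaw-drop style facial animation parameter might.
dst = src.copy()
lower_centre = (src[:, 1] > rows * 0.6) & (np.abs(src[:, 0] - cols / 2) < cols * 0.25)
dst[lower_centre, 1] += 15                            # vertex displacement in pixels

# warp() treats the transform as an output->source mapping, so estimate dst->src
# to move image content from the original vertices to the displaced ones.
tform = PiecewiseAffineTransform()
tform.estimate(dst, src)
animated = warp(image, tform, output_shape=image.shape)
print("warped frame shape:", animated.shape)
```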
Computational Psychiatry and the Challenge of Schizophrenia.
Krystal, John H; Murray, John D; Chekroud, Adam M; Corlett, Philip R; Yang, Genevieve; Wang, Xiao-Jing; Anticevic, Alan
2017-05-01
Schizophrenia research is plagued by enormous challenges in integrating and analyzing large datasets and difficulties developing formal theories related to the etiology, pathophysiology, and treatment of this disorder. Computational psychiatry provides a path to enhance analyses of these large and complex datasets and to promote the development and refinement of formal models for features of this disorder. This presentation introduces the reader to the notion of computational psychiatry and describes discovery-oriented and theory-driven applications to schizophrenia involving machine learning, reinforcement learning theory, and biophysically-informed neural circuit models. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center 2017.
Computer-assisted instruction and diagnosis of radiographic findings.
Harper, D; Butler, C; Hodder, R; Allman, R; Woods, J; Riordan, D
1984-04-01
Recent advances in computer technology, including high bit-density storage, digital imaging, and the ability to interface microprocessors with videodisk, create enormous opportunities in the field of medical education. This program, utilizing a personal computer, videodisk, the BASIC language, a linked textfile system, and a triangulation approach to the interpretation of radiographs developed by Dr. W. L. Thompson, enables the user to engage in a user-friendly, dynamic teaching program in radiology applicable to various levels of expertise. Advantages include a relatively compact and inexpensive system with rapid access and ease of revision that requires little instruction for the user.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Alexander S.; Bryantsev, Vyacheslav S.
Uranium is used as the basic fuel for nuclear power plants, which generate significant amounts of electricity and have life-cycle carbon emissions as low as renewable energy sources. However, the extraction of this valuable energy commodity from the ground remains controversial, mainly because of environmental and health impacts. Alternatively, seawater offers an enormous uranium resource that may be tapped at minimal environmental cost. Nowadays, amidoxime polymers are the most widely utilized sorbent materials for large-scale extraction of uranium from seawater, but they are not perfectly selective for uranyl, UO₂²⁺. In particular, the competition between UO₂²⁺ and VO²⁺/VO₂⁺ cations poses a significant challenge to the efficient mining of UO₂²⁺. Thus, screening and rational design of more selective ligands must be accomplished. One of the key components in achieving this goal is the establishment of computational techniques capable of assessing ligand selectivity trends. Here, we report an approach based on quantum chemical calculations that achieves high accuracy in reproducing experimental aqueous stability constants for VO²⁺/VO₂⁺ complexes with ten different oxygen donor ligands. The predictive power of the developed computational protocol was demonstrated for amidoxime-type ligands, providing greater insight into new design strategies for the development of the next generation of adsorbents with high selectivity toward UO₂²⁺ over VO²⁺/VO₂⁺ ions. Furthermore, the results of the calculations suggest that alkylation of amidoxime moieties present in poly(acrylamidoxime) sorbents can be a potential route to better discrimination between the uranyl and competing vanadium ions within seawater.
The Military in Disaster Relief After the Explosion in Halifax, Nova Scotia, December 1917
2017-06-09
...in Halifax, Nova Scotia. The blast had one-sixth the power of the first atomic bomb and killed or wounded 20 percent of the Halifax population. The enormous ensuing... Simpson and Alan Ruffman, "Explosions, Bombs, and Bumps: Scientific Aspects of the Explosion," in Ground Zero: A Reassessment of the 1917 Explosion in
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drewmark Communications; Sartor, Dale; Wilson, Mark
2010-07-01
High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.
Computational protein design: a review
NASA Astrophysics Data System (ADS)
Coluzza, Ivan
2017-04-01
Proteins are one of the most versatile modular assembling systems in nature. Experimentally, more than 110 000 protein structures have been identified and more are deposited every day in the Protein Data Bank. Such an enormous structural variety is to a first approximation controlled by the sequence of amino acids along the peptide chain of each protein. Understanding how the structural and functional properties of the target can be encoded in this sequence is the main objective of protein design. Unfortunately, rational protein design remains one of the major challenges across the disciplines of biology, physics and chemistry. The implications of solving this problem are enormous and branch into materials science, drug design, evolution and even cryptography. For instance, in the field of drug design an effective computational method to design protein-based ligands for biological targets such as viruses, bacteria or tumour cells, could give a significant boost to the development of new therapies with reduced side effects. In materials science, self-assembly is a highly desired property and soon artificial proteins could represent a new class of designable self-assembling materials. The scope of this review is to describe the state of the art in computational protein design methods and give the reader an outline of what developments could be expected in the near future.
Space and Power in the Ivory Tower: Effective Space Management and Decision Making
ERIC Educational Resources Information Center
Blanchette, Sandra
2012-01-01
At a time when there are enormous economic pressures on campuses to use resources effectively, space being one of these resources, the academic culture of shared governance, with its fragmented roles for decision making, presents additional challenges. These roles are fragmented due to independent faculty and administrative action. They are…
Harnessing the Power of Information Technology: Open Business Models in Higher Education
ERIC Educational Resources Information Center
Sheets, Robert G.; Crawford, Stephen
2012-01-01
Higher education is under enormous pressure to improve outcomes and reduce costs. Information technology can help achieve these goals, but only if it is properly harnessed. This article argues that one key to harnessing information technology is business model innovation that results in more "open" and "unbundled" operations in learning and…
A Powerful Tool: Writing Based on Knowledge and Understanding
ERIC Educational Resources Information Center
Ginty, Eloise; Hawkins, Joanna; Kurzman, Karen; Leddy, Diana; Miller, Jane
2016-01-01
The National Writing Project (NWP) has contributed enormously and consistently to the effort to help teachers help students learn to write. In the early 1970s, researchers such as Donald Graves and Janet Emig began studying the ways writers go about the task of thinking and producing polished writing. The NWP's book "Because Writing…
Feature Selection with Missing Data
ERIC Educational Resources Information Center
Sarkar, Saurabh
2013-01-01
In the modern world information has become the new power. An increasing amount of efforts are being made to gather data, resources being allocated, time being invested and tools being developed. Data collection is no longer a myth; however, it remains a great challenge to create value out of the enormous data that is being collected. Data modeling…
ERIC Educational Resources Information Center
Eichman, Bruce W.
2013-01-01
Organizational executives are concerned with the insufficient alignment of Information Technology (IT) investments to achieve computer-based information systems effectiveness. Survey results of senior executives determined that in spite of applying enormous amounts of resources and energy attempting to prioritize and effectively align these…
Career Repertoires of IT Students: A Group Counselling Case Study in Higher Education
ERIC Educational Resources Information Center
Penttinen, Leena; Vesisenaho, Mikko
2013-01-01
Uncertainty about future career prospects has increased enormously for students enrolled in higher education Information Technology (IT) programs. However, many computer science programmes pay little attention to career counselling. This article reports the results of a pilot study intended to develop group counselling for IT students to promote…
A Case Study in CAD Design Automation
ERIC Educational Resources Information Center
Lowe, Andrew G.; Hartman, Nathan W.
2011-01-01
Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…
The Eras and Trends of Automatic Short Answer Grading
ERIC Educational Resources Information Center
Burrows, Steven; Gurevych, Iryna; Stein, Benno
2015-01-01
Automatic short answer grading (ASAG) is the task of assessing short natural language responses to objective questions using computational methods. The active research in this field has increased enormously of late with over 80 papers fitting a definition of ASAG. However, the past efforts have generally been ad-hoc and non-comparable until…
Exploring the Integration of Data Mining and Data Visualization
ERIC Educational Resources Information Center
Zhang, Yi
2011-01-01
Due to the rapid advances in computing and sensing technologies, enormous amounts of data are being generated everyday in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be…
1983-06-03
Topics include agriculture; construction and related industries; consumer goods and domestic trade; economic affairs; energy; human resources; international economic relations; transportation; physics and mathematics; space; space biology and aerospace medicine; military affairs; chemistry; and cybernetics and computers. ...tons of "black ..., it is necessary to mention the enormous contribution to the Tatar ASSR's oil industry. ... /Tatar ASSR Production
India and the 21st Century Power Partnership; 21st Century Power Partnership
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Indian leadership has been crucial in both the establishment of the foundational capabilities of the Power Partnership as well as the realization of opportunities for applied policy implementation. Understanding policy regimes that support and accelerate the transition to clean, smart, efficient, affordable, and reliable electricity will help inform decisions that enable innovative solutions, or even disruptive innovations. India has already demonstrated a wide variety of business model innovation in a wide range of industries. Its potential to do the same in the power sector is an enormous opportunity. Indian contributions to international knowledge about the legal, regulatory and business environment will be an integral component of the Power Partnership program.
EPA RE-Powering Mapper Completed Installations
The U.S. Environmental Protection Agency (EPA) Office of Land and Emergency Management (OLEM) Office of Communications, Partnerships and Analysis (OCPA) initiated the RE-Powering America's Land Initiative to demonstrate the enormous potential that contaminated lands, landfills, and mine sites provide for developing renewable energy in the United States. Using publicly available information, RE-Powering maintains a list of completed renewable energy installations on contaminated sites and landfills. To date, the RE-Powering Initiative has identified 179 renewable energy installations on 171 contaminated lands, landfills, and mine sites, with a cumulative installed capacity of just over 1,124 megawatts (MW) and consistent growth in total installations since the inception of the RE-Powering Initiative. This dataset is current as of April 2016.
Genome-Wide Analysis of Gene-Gene and Gene-Environment Interactions Using Closed-Form Wald Tests.
Yu, Zhaoxia; Demetriou, Michael; Gillen, Daniel L
2015-09-01
Despite the successful discovery of hundreds of variants for complex human traits using genome-wide association studies, the degree to which genes and environmental risk factors jointly affect disease risk is largely unknown. One obstacle toward this goal is that the computational effort required for testing gene-gene and gene-environment interactions is enormous. As a result, numerous computationally efficient tests were recently proposed. However, the validity of these methods often relies on unrealistic assumptions such as additive main effects, main effects at only one variable, no linkage disequilibrium between the two single-nucleotide polymorphisms (SNPs) in a pair or gene-environment independence. Here, we derive closed-form and consistent estimates for interaction parameters and propose to use Wald tests for testing interactions. The Wald tests are asymptotically equivalent to the likelihood ratio tests (LRTs), largely considered to be the gold standard tests but generally too computationally demanding for genome-wide interaction analysis. Simulation studies show that the proposed Wald tests have very similar performances with the LRTs but are much more computationally efficient. Applying the proposed tests to a genome-wide study of multiple sclerosis, we identify interactions within the major histocompatibility complex region. In this application, we find that (1) focusing on pairs where both SNPs are marginally significant leads to more significant interactions when compared to focusing on pairs where at least one SNP is marginally significant; and (2) parsimonious parameterization of interaction effects might decrease, rather than increase, statistical power. © 2015 WILEY PERIODICALS, INC.
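To make the tested quantity concrete, the sketch below fits a logistic model with a SNP-by-environment product term on simulated data and forms the Wald statistic for the interaction coefficient. This is a generic single-pair illustration using statsmodels, not the paper's closed-form estimator, and all simulation settings are assumptions:

```python
# Wald test of a gene-environment interaction coefficient in logistic regression.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 5000
snp = rng.binomial(2, 0.3, n)                 # genotype coded 0/1/2 (assumed MAF 0.3)
env = rng.binomial(1, 0.5, n)                 # binary environmental exposure
logit = -1.0 + 0.2 * snp + 0.3 * env + 0.4 * snp * env   # true interaction = 0.4 (assumed)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([snp, env, snp * env]))
fit = sm.Logit(y, X).fit(disp=0)

beta, se = fit.params[-1], fit.bse[-1]        # interaction coefficient and its SE
z = beta / se                                 # Wald statistic
p = 2 * norm.sf(abs(z))
print(f"interaction beta = {beta:.3f}, Wald z = {z:.2f}, p = {p:.2e}")
```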
Hassanzadeh, Parichehr; Atyabi, Fatemeh; Dinarvand, Rassoul
2017-08-01
The limited efficiency of the current treatment options against central nervous system (CNS) disorders has created increasing demand for the development of novel theranostic strategies. The enormous research efforts in nanotechnology have led to the production of highly advanced nanodevices and biomaterials in a variety of geometries and configurations for targeted delivery of genes, drugs, or growth factors across the blood-brain barrier. Meanwhile, the richness and reliability of data, drug delivery methods, the therapeutic effects or potential toxicity of nanoparticles, the occurrence of unexpected phenomena due to the polydisperse or polymorphic nature of nanomaterials, and personalized theranostics have remained challenging issues. In this respect, computational modelling has emerged as a powerful tool for the rational design of nanoparticles with optimized characteristics, including selectivity, improved bioactivity, and reduced toxicity, that might lead to the effective delivery of therapeutic agents. High-performance simulation techniques, by shedding more light on the dynamical behaviour of neural networks and the pathomechanisms of CNS disorders, may provide imminent breakthroughs in nanomedicine. In the present review, the importance of integrating nanotechnology-based approaches with computational techniques for targeted delivery of theranostics to the CNS is highlighted. Copyright © 2017. Published by Elsevier Inc.
3D reconstruction of cystoscopy videos for comprehensive bladder records
Lurie, Kristen L.; Angst, Roland; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.
2017-01-01
White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize a 3D model of organs from an endoscopic video that captures the shape and surface appearance of the organ. A key aspect of our strategy is the use of advanced computer vision techniques and unmodified, clinical-grade endoscopy hardware with few constraints on the image acquisition protocol, which presents a low barrier to clinical translation. We validate the accuracy and robustness of our reconstruction and co-registration method using cystoscopy videos from tissue-mimicking bladder phantoms and show clinical utility during cystoscopy in the operating room for bladder cancer evaluation. As our method can powerfully augment the visual medical record of the appearance of internal organs, it is broadly applicable to endoscopy and represents a significant advance in cancer surveillance opportunities for big-data cancer research. PMID:28736658
Deo, Rahul C.
2015-01-01
Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games – tasks which would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in healthcare. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades – and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. PMID:26572668
Liberty Versus Life: Suicide in the Writings of Montesquieu.
Cantrell, Cheryl
2015-01-01
Charles-Louis de Secondat, Baron de La Brède et de Montesquieu (1689-1755), the French philosopher who had such an enormous impact on the American constitution through his theory of the separation of powers, had an unusually sympathetic view of suicide. Indeed, he is the only major thinker in Western history to have produced a sustained argument against St. Thomas Aquinas' enormously influential views on this subject. Yet few scholars have attempted to analyze this argument, and none to explain why it was so important to him to make it. This paper demonstrates that Montesquieu's support for suicide in desperate circumstances is inextricably associated with the love of liberty for which he is justly celebrated, and has the potential to radically transform the way we look at suicide and suicidal ideation today.
An Introduction to Distributions Using Weighted Dice
ERIC Educational Resources Information Center
Holland, Bart K.
2011-01-01
Distributions are the basis for an enormous amount of theoretical and applied work in statistics. While there are formal definitions of distributions and many formulas to characterize them, it is important that students at first get a clear introduction to this basic concept. For many of them, neither words nor formulas can match the power of a…
The Roots of the Right-Wing Attack on Higher Education
ERIC Educational Resources Information Center
Schrecker, Ellen
2010-01-01
The enormous changes that took place on American campuses during the 1960s not only opened those campuses to new constituencies and new ideas, but also created a powerful conservative movement that sought to reverse those changes. Along with the rising cost of higher education, the right's campaign against the academic reforms of the sixties has…
Renewable energy production is expected to increase significantly in the next 25 years. The U.S. Environmental Protection Agency (EPA) Office of Solid Waste and Emergency Response (OSWER) Center for Program Analysis (OCPA) has initiated the RE-Powering America's Land Initiative to demonstrate the enormous potential that contaminated land and mining sites provide for developing renewable energy in the U.S.
J. Edgar Hoover and the Black Press in World War II.
ERIC Educational Resources Information Center
Washburn, Patrick S.
Holding enormous if controversial power as Director of the Federal Bureau of Investigation (FBI), J. Edgar Hoover was sometimes controlled unexpectedly at the highest reaches of government, as illustrated by his failed attempt to obtain an Espionage Act indictment against the black press during World War II. Following anarchist bombings in 1919,…
Equalisation or Inflation? Social Class and Gender Differentials in England and Wales
ERIC Educational Resources Information Center
Sullivan, Alice; Heath, Anthony; Rothon, Catherine
2011-01-01
The Labour government elected in 1997, which lost power in 2010, was the longest serving Labour administration Britain has ever had. This period saw an enormous expansion of further and higher education, and an increase in the proportion of students achieving school-level qualifications. But have inequalities diminished as a result? We examine the…
Are we there yet? Tracking the development of new model systems
A. Abzhanov; C. Extavour; A. Groover; S. Hodges; H. Hoekstra; E. Kramer; A. Monteiro
2008-01-01
It is increasingly clear that additional "model" systems are needed to elucidate the genetic and developmental basis of organismal diversity. Whereas model system development previously required enormous investment, recent advances including the decreasing cost of DNA sequencing and the power of reverse genetics to study gene function are greatly facilitating...
Online production validation in a HEP environment
NASA Astrophysics Data System (ADS)
Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.
2017-03-01
In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient production of simulations and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.
ERIC Educational Resources Information Center
Dinov, Ivo D.; Sanchez, Juana; Christou, Nicolas
2008-01-01
Technology-based instruction represents a new recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and…
Virtualization in education: Information Security lab in your hands
NASA Astrophysics Data System (ADS)
Karlov, A. A.
2016-09-01
The growing demand for qualified specialists in advanced information technologies poses serious challenges to the education and training of young personnel for science, industry and social problems. Virtualization as a way to isolate the user from the physical characteristics of computing resources (processors, servers, operating systems, networks, applications, etc.), has, in particular, an enormous influence in the field of education, increasing its efficiency, reducing the cost, making it more widely and readily available. The study of Information Security of computer systems is considered as an example of use of virtualization in education.
CLAST: CUDA implemented large-scale alignment search tool.
Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken
2014-12-11
Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.
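CLAST's CUDA kernels are not reproduced in this record, but the global alignment it performs by default can be illustrated with a minimal Needleman-Wunsch scoring sketch in Python; the scoring values and sequences below are arbitrary illustrations, not CLAST's actual parameters or implementation.

```python
# Minimal global-alignment (Needleman-Wunsch) score; illustrative only,
# not CLAST's CUDA implementation or scoring scheme.
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    m = len(b)
    prev = [j * gap for j in range(m + 1)]  # DP row for the empty prefix of a
    for i in range(1, len(a) + 1):
        curr = [i * gap] + [0] * m
        for j in range(1, m + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(diag, prev[j] + gap, curr[j - 1] + gap)
        prev = curr
    return prev[m]

# One mismatch between two 8-mers gives a score of 6 with these toy parameters.
print(global_alignment_score("ACGTACGT", "ACGAACGT"))
```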
Ivanov, Alexander S.; Bryantsev, Vyacheslav S.
2016-06-06
Uranium is used as the basic fuel for nuclear power plants, which generate significant amounts of electricity and have life cycle carbon emissions that are as low as renewable energy sources. However, the extraction of this valuable energy commodity from the ground remains controversial, mainly because of environmental and health impacts. Alternatively, seawater offers an enormous uranium resource that may be tapped at minimal environmental cost. Nowadays, amidoxime polymers are the most widely utilized sorbent materials for large-scale extraction of uranium from seawater, but they are not perfectly selective for uranyl, UO2^2+. In particular, the competition between UO2^2+ and VO^2+/VO2^+ cations poses a significant challenge to the efficient mining of UO2^2+. Thus, screening and rational design of more selective ligands must be accomplished. One of the key components in achieving this goal is the establishment of computational techniques capable of assessing ligand selectivity trends. Here, we report an approach based on quantum chemical calculations that achieves high accuracy in reproducing experimental aqueous stability constants for VO^2+/VO2^+ complexes with ten different oxygen donor ligands. The predictive power of the developed computational protocol was demonstrated for amidoxime-type ligands, providing greater insights into new design strategies for the development of the next generation of adsorbents with high selectivity toward UO2^2+ over VO^2+/VO2^+ ions. Furthermore, the results of calculations suggest that alkylation of amidoxime moieties present in poly(acrylamidoxime) sorbents can be a potential route to better discrimination between the uranyl and competing vanadium ions within seawater.
The Rise of Women: The Growing Gender Gap in Education and What It Means for American Schools
ERIC Educational Resources Information Center
DiPrete, Thomas A.; Buchmann, Claudia
2013-01-01
While powerful gender inequalities remain in American society, women have made substantial gains and now largely surpass men in one crucial arena: education. Women now outperform men academically at all levels of school, and are more likely to obtain college degrees and enroll in graduate school. What accounts for this enormous reversal in the…
ERIC Educational Resources Information Center
DAVENPORT, ROY K.; AND OTHERS
A JOINT CONFERENCE OF PERSONNEL CONCERNED WITH EDUCATION AND TRAINING OF MANPOWER IN THE DEPARTMENT OF DEFENSE (DOD), OFFICE OF EDUCATION, AND THE NATIONAL SECURITY INDUSTRIAL ASSOCIATION, WAS CALLED BY THE DOD TO CONSIDER HOW THE THREE ORGANIZATIONS COULD COLLABORATE. THE DOD PROPOSED USING THE ENORMOUS POWER OF ITS PROCUREMENT SYSTEM TO…
Microwave Plasma Based Single-Step Method for Generation of Carbon Nanostructures
2013-07-01
Técnico, Technical University of Lisbon, Portugal; Mechanical and Aerospace Engineering, Naval Postgraduate School, Monterey, CA 93943, U.S.A. ... Plasma environments constitute powerful tools in materials science due to their operation as thermal and chemical reactors. A microwave, atmospheric... applications include electronic devices, transparent conductive films, mechanical devices, chemical sensors, spintronic devices. Moreover, it shows enormous
High-throughput biological techniques, like microarrays and drug screens, generate an enormous amount of data that may be critically important for cancer researchers and clinicians. Being able to manipulate the data to extract those pieces of interest, however, can require computational or bioinformatics skills beyond those of the average scientist.
ERIC Educational Resources Information Center
Perera, Indika
2010-01-01
ICT (information and communication technologies) offer an enormous range of approaches for bringing computing into users' daily lives. Every aspect of social need has been touched by ICT, including learning. VL (virtual learning), with a life span of slightly above a decade, still looks for possible approaches to enhance its functions with significant pressure…
Addressing the Digital Divide in Contemporary Biology: Lessons from Teaching UNIX.
Mangul, Serghei; Martin, Lana S; Hoffmann, Alexander; Pellegrini, Matteo; Eskin, Eleazar
2017-10-01
Life and medical science researchers increasingly rely on applications that lack a graphical interface. Scientists who are not trained in computer science face an enormous challenge analyzing high-throughput data. We present a training model for use of command-line tools when the learner has little to no prior knowledge of UNIX. Copyright © 2017 Elsevier Ltd. All rights reserved.
Better Education at Ishik University Preparatory School with Extracurricular Activities
ERIC Educational Resources Information Center
Yildiz, Yunus
2015-01-01
It cannot be said that education in institutions today is better than in the previous century, because in the past students' minds were not as full of time-consuming distractions, such as spending enormous amounts of time in front of a computer or a television, as they are today. Consequently, teachers used to concentrate on their job well and students used to focus on the study…
Modeling Criminal Activity in Urban Landscapes
NASA Astrophysics Data System (ADS)
Brantingham, Patricia; Glässer, Uwe; Jackson, Piper; Vajihollahi, Mona
Computational and mathematical methods arguably have an enormous potential for serving practical needs in crime analysis and prevention by offering novel tools for crime investigations and experimental platforms for evidence-based policy making. We present a comprehensive formal framework and tool support for mathematical and computational modeling of criminal behavior to facilitate systematic experimental studies of a wide range of criminal activities in urban environments. The focus is on spatial and temporal aspects of different forms of crime, including opportunistic and serial violent crimes. However, the proposed framework provides a basis to push beyond conventional empirical research and engage the use of computational thinking and social simulations in the analysis of terrorism and counter-terrorism.
The influence of large-scale wind power on global climate.
Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J
2004-11-16
Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.
NASA Astrophysics Data System (ADS)
Gerjuoy, Edward
2005-06-01
The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
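To make the scale argument concrete, here is a toy RSA round trip in Python with deliberately tiny primes; real moduli are hundreds of digits long, which is what makes classical factoring infeasible and order finding the step Shor's algorithm accelerates. All numbers are illustrative and are not taken from the paper.

```python
# Toy RSA with tiny primes (illustration only; real keys use primes of
# hundreds of digits, which is what makes classical factoring infeasible).
p, q = 61, 53
N = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)      # Euler's totient, kept secret
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

message = 1234
cipher = pow(message, e, N)          # c = m^e mod N
assert pow(cipher, d, N) == message  # m = c^d mod N

# Shor's algorithm attacks N by finding the order r of a random base a modulo N;
# the brute-force search below is the part that becomes hopeless for large N.
a = 7
r = next(k for k in range(1, N) if pow(a, k, N) == 1)
# When r is even, gcd(a**(r//2) - 1, N) or gcd(a**(r//2) + 1, N) yields a factor of N.
print(N, r)
```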
Energy Fluctuations Shape Free Energy of Nonspecific Biomolecular Interactions
NASA Astrophysics Data System (ADS)
Elkin, Michael; Andre, Ingemar; Lukatsky, David B.
2012-01-01
Understanding design principles of biomolecular recognition is a key question of molecular biology. Yet the enormous complexity and diversity of biological molecules hamper the efforts to gain a predictive ability for the free energy of protein-protein, protein-DNA, and protein-RNA binding. Here, using a variant of the Derrida model, we predict that for a large class of biomolecular interactions, it is possible to accurately estimate the relative free energy of binding based on the fluctuation properties of their energy spectra, even if a finite number of the energy levels is known. We show that the free energy of the system possessing a wider binding energy spectrum is almost surely lower compared with the system possessing a narrower energy spectrum. Our predictions imply that low-affinity binding scores, usually wasted in protein-protein and protein-DNA docking algorithms, can be efficiently utilized to compute the free energy. Using the results of Rosetta docking simulations of protein-protein interactions from Andre et al. (Proc. Natl. Acad. Sci. USA 105:16148, 2008), we demonstrate the power of our predictions.
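The central claim, that a wider binding-energy spectrum almost surely gives a lower free energy, can be checked numerically from the standard relation F = -kT ln(sum_i exp(-E_i/kT)). The sketch below uses synthetic Gaussian spectra rather than the Rosetta docking scores analyzed in the paper.

```python
import numpy as np

def free_energy(energies, kT=1.0):
    # F = -kT * ln(sum_i exp(-E_i / kT)), computed with a logsumexp-style shift
    e = np.asarray(energies) / kT
    m = e.min()
    return -kT * (np.log(np.exp(-(e - m)).sum()) - m)

rng = np.random.default_rng(0)
narrow = rng.normal(loc=0.0, scale=0.5, size=10_000)  # narrow energy spectrum
wide = rng.normal(loc=0.0, scale=2.0, size=10_000)    # wider spectrum, same mean

# The wider spectrum yields the lower (more favorable) free energy.
print(free_energy(narrow), free_energy(wide))
```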
Gaspar, Paula; Carvalho, Ana L; Vinga, Susana; Santos, Helena; Neves, Ana Rute
2013-11-01
The lactic acid bacteria (LAB) are a functionally related group of low-GC Gram-positive bacteria known essentially for their roles in bioprocessing of foods and animal feeds. Due to extensive industrial use and enormous economical value, LAB have been intensively studied and a large body of comprehensive data on their metabolism and genetics was generated throughout the years. This knowledge has been instrumental in the implementation of successful applications in the food industry, such as the selection of robust starter cultures with desired phenotypic traits. The advent of genomics, functional genomics and high-throughput experimentation combined with powerful computational tools currently allows for a systems level understanding of these food industry workhorses. The technological developments in the last decade have provided the foundation for the use of LAB in applications beyond the classic food fermentations. Here we discuss recent metabolic engineering strategies to improve particular cellular traits of LAB and to design LAB cell factories for the bioproduction of added value chemicals. Copyright © 2013 Elsevier Inc. All rights reserved.
Solar thermal electric power plants - Their performance characteristics and total social costs
NASA Technical Reports Server (NTRS)
Caputo, R. S.; Truscello, V. C.
1976-01-01
The central receiver (power tower) concept as a thermal conversion approach to the conversion of solar energy into electricity is compared to other solar power plant designs which feature distributed solar collection and use other types of solar collector configurations. A variety of solar thermal storage concepts are discussed and their impacts on system performance are assessed. Although a good deal of quantification is possible in a comparative study, the subjective judgments carry enormous weight in a socio-economic decision, the ultimate choice of central power plant being more a social than an economic or technical decision. Major elements of the total social cost of each type of central plant are identified as utility economic costs, R&D funds, health costs, and other relevant social impacts.
paraGSEA: a scalable approach for large-scale gene expression profiling
Peng, Shaoliang; Yang, Shunyun
2017-01-01
More and more studies use gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to the enormous computational overhead of its significance estimation and multiple hypothesis testing steps, its scalability and efficiency are poor on large-scale datasets. We propose paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster on Tianhe-2, or to within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
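For readers unfamiliar with the statistic whose repeated evaluation dominates that overhead, the GSEA running-sum enrichment score can be sketched in a few lines of Python. This is the textbook form of the statistic, not paraGSEA's optimized or parallel implementation, and the example data are invented.

```python
import numpy as np

def enrichment_score(ranked_genes, correlations, gene_set, p=1.0):
    # Classic GSEA weighted Kolmogorov-Smirnov running sum (illustrative only).
    in_set = np.array([g in gene_set for g in ranked_genes])
    weights = np.abs(np.asarray(correlations)) ** p
    hit = np.where(in_set, weights, 0.0)
    hit /= hit.sum()                       # increments for genes in the set
    miss = np.where(~in_set, 1.0, 0.0)
    miss /= miss.sum()                     # increments for genes outside the set
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]  # signed maximum deviation

genes = [f"g{i}" for i in range(8)]                  # genes ranked by correlation
corr = [0.9, 0.8, 0.5, 0.3, -0.1, -0.4, -0.7, -0.9]
print(enrichment_score(genes, corr, {"g0", "g1", "g3"}))
```

The significance estimation mentioned in the abstract repeats this kind of computation many times over permuted data for every gene set, which is the work paraGSEA reorganizes and parallelizes.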
NASA Astrophysics Data System (ADS)
Kalantari, Bahman
Polynomiography is the algorithmic visualization of iterative systems for computing roots of a complex polynomial. It is well known that iterations of a rational function in the complex plane result in chaotic behavior near its Julia set. In one scheme of computing polynomiography for a given polynomial p(z), we select an individual member from the Basic Family, an infinite fundamental family of rational iteration functions that in particular include Newton's. Polynomiography is an excellent means for observing, understanding, and comparing chaotic behavior for a variety of iterative systems. Other iterative schemes in polynomiography are possible and result in chaotic behavior of different kinds. In another scheme, the Basic Family is collectively applied to p(z) and the iterates for any seed in the Voronoi cell of a root converge to that root. Polynomiography reveals chaotic behavior of another kind near the boundary of the Voronoi diagram of the roots. We also describe a novel Newton-Ellipsoid iterative system with its own chaos and exhibit images demonstrating polynomiographies of chaotic behavior of different kinds. Finally, we consider chaos for the more general case of polynomiography of complex analytic functions. On the one hand, polynomiography is a powerful medium capable of demonstrating chaos in different forms; it is educationally instructive to students and researchers, and it gives rise to numerous research problems. On the other hand, it is a medium resulting in images with enormous aesthetic appeal to general audiences.
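A minimal version of the first scheme described above, applying Newton's member of the Basic Family to a polynomial and coloring each seed by the root it converges to, can be sketched as follows; the polynomial p(z) = z^3 - 1, the grid, and the iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Newton's iteration (the simplest member of the Basic Family) for p(z) = z^3 - 1,
# labeling every seed in a grid by the root it approaches. Chaotic behavior shows
# up at the basin boundaries, i.e. near the Julia set of the iteration.
roots = np.array([1.0, -0.5 + 0.8660254j, -0.5 - 0.8660254j])

def newton_basins(width=400, height=400, max_iter=40):
    x = np.linspace(-1.5, 1.5, width)
    y = np.linspace(-1.5, 1.5, height)
    z = x[None, :] + 1j * y[:, None]
    for _ in range(max_iter):
        z = z - (z**3 - 1) / (3 * z**2)   # Newton step for z^3 - 1
    return np.argmin(np.abs(z[..., None] - roots), axis=-1)

basins = newton_basins()
print(np.bincount(basins.ravel()))  # pixel counts of the three basins of attraction
```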
Deo, Rahul C
2015-11-17
Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games - tasks that would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in health care. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades, and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus, part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. © 2015 American Heart Association, Inc.
A prototype system based on visual interactive SDM called VGC
NASA Astrophysics Data System (ADS)
Jia, Zelu; Liu, Yaolin; Liu, Yanfang
2009-10-01
In many application domains, data is collected and referenced by its geo-spatial location. Spatial data mining, or the discovery of interesting patterns in such databases, is an important capability in the development of database systems. Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, road traffic accident analysis, etc. It demands efficient solutions for many new, expensive, and complicated problems. For spatial data mining of large data sets to be effective, it is also important to include humans in the data exploration process and combine their flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual spatial data mining applies human visual perception to the exploration of large data sets. Presenting data in an interactive, graphical form often fosters new insights, encouraging the formation and validation of new hypotheses to the end of better problem-solving and gaining deeper domain knowledge. In this paper a visual interactive spatial data mining prototype system (visual geo-classify) based on VC++6.0 and MapObject2.0 is designed and developed; the basic spatial data mining algorithms used are decision trees and Bayesian networks, and data classification is realized through training, learning, and the integration of the two. The result indicates it is a practical and extensible visual interactive spatial data mining tool.
ERIC Educational Resources Information Center
Rios-Aguilar, Cecilia
2012-01-01
While higher education institutions seem to be utilizing social media more and more, there still exist enormous challenges in trying to understand the new dynamics generated by social media in higher education, particularly for the context of community colleges. In March 2011, the author and her colleague Dr. Regina Deil-Amen (Associate Professor…
1981-10-01
THE EFFECT OF BLOWSAND REDUCTION ON THE ABUNDANCE OF THE FRINGE-TOED LIZA..(U) California Univ Los Angeles Lab of Nuclear Medicine ... accomplished, but the idea is worth testing. The added power of more than two samples is enormous in improving capture-recapture estimates of numbers. A chain
Translations on USSR Resources, Number 767.
1978-01-19
photography and so on). The amount of data obtained as a result of additional surveys makes it possible to evaluate the intensity and configuration... machine tools, chemical products, refrigerators, as well as potatoes and products of livestock breeding. The Kazakh SSR made an enormous leap in its... of the fuel and water power resources of Georgia, Azerbaydzhan and Armenia. Petroleum, transport and electrical machine building, machine tool
Role of computer-based learning in tooth carving in dentistry: An Indian perspective.
Juneja, Saurabh; Juneja, Manjushree
2016-01-01
Tooth carving is an important practical preclinical exercise in the curriculum in Indian dental education setup. It forms the basis of introduction to tooth anatomy, morphology and occlusion of primary and permanent teeth through practical approach. It requires enormous time and manpower to master the skill. Therefore, there is an imminent necessity to incorporate computer-based learning of the art of tooth carving for effective teaching and efficient student learning. This will ensure quality time to be spent on other academic and research activities by students and faculty in addition to adding value as a teaching aid.
3D data processing with advanced computer graphics tools
NASA Astrophysics Data System (ADS)
Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott
2012-09-01
Often, the 3-D raw data coming from an optical profilometer contain spiky noise and an irregular grid, which make them difficult to analyze and difficult to store because of their enormously large size. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data and by rapidly re-sampling the raw data onto regular grids at any pixel size and any orientation with advanced computer graphics tools. Experimental results will be presented to demonstrate the effectiveness of the proposed approach.
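A hedged sketch of the two operations described, suppressing spiky outliers and re-sampling scattered 3-D points onto a regular grid, using standard SciPy tools; the profilometer data here are synthetic, the filter and grid sizes are arbitrary, and this is not the authors' graphics-based pipeline.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import median_filter

# Synthetic stand-in for profilometer output: scattered (x, y, z) samples with spikes.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 5000), rng.uniform(0, 10, 5000)
z = np.sin(x) * np.cos(y)
z[rng.choice(5000, 50, replace=False)] += 20.0            # spiky noise

# Re-sample onto a regular grid at a chosen pixel size (0.1 units here).
gx, gy = np.meshgrid(np.arange(0, 10, 0.1), np.arange(0, 10, 0.1))
gz = griddata((x, y), z, (gx, gy), method="linear")

# Suppress the spikes with a median filter (window size is an arbitrary choice).
gz_clean = median_filter(np.nan_to_num(gz), size=5)
print(gz.shape, float(np.nanmax(gz)), float(gz_clean.max()))
```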
Optimizing R with SparkR on a commodity cluster for biomedical research.
Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan
2016-12-01
Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibilities to build a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS-calculation in R on a single desktop computer, a Message Passing Interface (MPI)-cluster, and a SparkR-cluster were compared with regards to the performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regards to data distribution. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
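The underlying pattern, splitting an embarrassingly parallel association analysis into independent chunks and running them concurrently, can be illustrated generically. The sketch below uses Python's multiprocessing purely to show the chunking idea; it is not the R, MPI, or SparkR code benchmarked in the study, and all sizes are made up.

```python
from multiprocessing import Pool

import numpy as np

def associate_chunk(chunk_id):
    # Stand-in for per-SNP association tests on one chunk of a GWAS: each chunk
    # is independent, which is what makes MPI- or Spark-style scaling possible.
    rng = np.random.default_rng(chunk_id)
    genotypes = rng.integers(0, 3, size=(1000, 500)).astype(float)  # samples x SNPs
    phenotype = rng.normal(size=1000)
    g = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
    p = (phenotype - phenotype.mean()) / phenotype.std()
    return float(np.abs(g.T @ p / len(p)).max())  # strongest toy test statistic

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(associate_chunk, range(16))  # 16 independent chunks
    print(max(results))
```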
A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao
2018-05-23
The diversity of IoT services and applications brings enormous challenges to improving the performance of multiple computer tasks' scheduling in cross-layer cloud computing systems. Unfortunately, the commonly-employed frameworks fail to adapt to the new patterns on the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and computer tasks. Then, we design the scheduling framework based on the analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, the algorithms are given based on the framework, and extensive experiments are also given to validate its effectiveness, as well as its superiority.
Radiolytic Gas-Driven Cryovolcanism in the Outer Solar System
NASA Technical Reports Server (NTRS)
Cooper, John F.; Cooper, Paul D.; Sittler, Edward C.; Sturner, Steven J.; Rymer, Abigail M.; Hill, Matthew E.
2007-01-01
Water ices in surface crusts of Europa, Enceladus, Saturn's main rings, and Kuiper Belt Objects can become heavily oxidized from radiolytic chemical alteration of near-surface water ice by space environment irradiation. Oxidant accumulations and gas production are manifested in part through observed H2O2 on Europa, tentatively also on Enceladus, and found elsewhere in gaseous or condensed phases at moons and rings of Jupiter and Saturn. On subsequent chemical contact in sub-surface environments with significant concentrations of primordially abundant reductants such as NH3 and CH4, oxidants of radiolytic origin can react exothermically to power gas-driven cryovolcanism. The gas-piston effect enormously amplifies the mass flow output in the case of gas formation at basal thermal margins of incompressible fluid reservoirs. Surface irradiation, H2O2 production, NH3 oxidation, and resultant heat, gas, and gas-driven mass flow rates are computed in the fluid reservoir case for selected bodies. At Enceladus the oxidant power inputs are comparable to limits on nonthermal kinetic power for the south polar plumes. Total heat output and plume gas abundance may be accounted for at Enceladus if plume activity is cyclic in high and low "Old Faithful" phases, so that oxidants can accumulate during low activity phases. Interior upwelling of primordially abundant NH3 and CH4 hydrates is assumed to resupply the reductant fuels. Much lower irradiation fluxes on Kuiper Belt Objects require correspondingly larger times for accumulation of oxidants to produce comparable resurfacing, but brightness and surface composition of some objects suggest that such activity may be ongoing.
Zeng, Ping; Mukherjee, Sayan; Zhou, Xiang
2017-01-01
Epistasis, commonly defined as the interaction between multiple genes, is an important genetic component underlying phenotypic variation. Many statistical methods have been developed to model and identify epistatic interactions between genetic variants. However, because of the large combinatorial search space of interactions, most epistasis mapping methods face enormous computational challenges and often suffer from low statistical power due to multiple test correction. Here, we present a novel, alternative strategy for mapping epistasis: instead of directly identifying individual pairwise or higher-order interactions, we focus on mapping variants that have non-zero marginal epistatic effects—the combined pairwise interaction effects between a given variant and all other variants. By testing marginal epistatic effects, we can identify candidate variants that are involved in epistasis without the need to identify the exact partners with which the variants interact, thus potentially alleviating much of the statistical and computational burden associated with standard epistatic mapping procedures. Our method is based on a variance component model, and relies on a recently developed variance component estimation method for efficient parameter inference and p-value computation. We refer to our method as the “MArginal ePIstasis Test”, or MAPIT. With simulations, we show how MAPIT can be used to estimate and test marginal epistatic effects, produce calibrated test statistics under the null, and facilitate the detection of pairwise epistatic interactions. We further illustrate the benefits of MAPIT in a QTL mapping study by analyzing the gene expression data of over 400 individuals from the GEUVADIS consortium. PMID:28746338
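The idea of a marginal epistatic effect can be made concrete with a small simulation: a phenotype is generated with one pairwise interaction, and for each candidate variant the combined interaction signal with all other variants is estimated by a simple method-of-moments regression. This is a conceptual sketch of the principle only; MAPIT's actual variance component model and its efficient estimator are described in the paper, and all sizes and effect values below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 200
X = rng.integers(0, 3, size=(n, p)).astype(float)
X = (X - X.mean(0)) / X.std(0)                 # standardized genotypes

# Phenotype driven by one pairwise interaction (SNP 0 x SNP 7) plus noise.
y = 0.8 * X[:, 0] * X[:, 7] + rng.normal(size=n)
y = (y - y.mean()) / y.std()

def marginal_epistasis_score(j):
    # Interaction covariance of candidate SNP j with all remaining SNPs,
    # then a Haseman-Elston-like regression of y*y' on it (off-diagonal only).
    others = np.delete(np.arange(p), j)
    K = X[:, others] @ X[:, others].T / len(others)
    Gj = X[:, j, None] * K * X[None, :, j]
    mask = ~np.eye(n, dtype=bool)
    yy, g = np.outer(y, y)[mask], Gj[mask]
    return float(g @ yy / (g @ g))             # estimated marginal epistatic variance

# SNPs 0 and 7 carry marginal epistatic signal; SNP 50 is a null comparison.
print([round(marginal_epistasis_score(j), 3) for j in (0, 7, 50)])
```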
Kennedy, Jonathan; Marchesi, Julian R; Dobson, Alan DW
2008-01-01
Metagenomic based strategies have previously been successfully employed as powerful tools to isolate and identify enzymes with novel biocatalytic activities from the unculturable component of microbial communities from various terrestrial environmental niches. Both sequence based and function based screening approaches have been employed to identify genes encoding novel biocatalytic activities and metabolic pathways from metagenomic libraries. While much of the focus to date has centred on terrestrial based microbial ecosystems, it is clear that the marine environment has enormous microbial biodiversity that remains largely unstudied. Marine microbes are both extremely abundant and diverse; the environments they occupy likewise consist of very diverse niches. As culture-dependent methods have thus far resulted in the isolation of only a tiny percentage of the marine microbiota the application of metagenomic strategies holds great potential to study and exploit the enormous microbial biodiversity which is present within these marine environments. PMID:18717988
Giese, Martin A; Rizzolatti, Giacomo
2015-10-07
Action recognition has received enormous interest in the field of neuroscience over the last two decades. In spite of this interest, the knowledge in terms of fundamental neural mechanisms that provide constraints for underlying computations remains rather limited. This fact stands in contrast with a wide variety of speculative theories about how action recognition might work. This review focuses on new fundamental electrophysiological results in monkeys, which provide constraints for the detailed underlying computations. In addition, we review models for action recognition and processing that have concrete mathematical implementations, as opposed to conceptual models. We think that only such implemented models can be meaningfully linked quantitatively to physiological data and have a potential to narrow down the many possible computational explanations for action recognition. In addition, only concrete implementations allow judging whether postulated computational concepts have a feasible implementation in terms of realistic neural circuits. Copyright © 2015 Elsevier Inc. All rights reserved.
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
JPRS Report, Nuclear Developments
1991-04-23
East countries. Bush once expressed the hope of reducing arms sales to the Middle East but has never taken effective measures to weaken the flow... Taishan... structure containing sand 350 meters to 500 meters below the surface and then... armament will bring enormous effects in the strategic environment in Northeast... underestimation of its consequences may have some tragic effects on Bulgaria's population, which... Kozloduy Nuclear Power Plant which results in radioac-
Preservation and Enhancement of the American Falls at Niagara.
1975-01-01
Niagara Falls are located 19 miles downstream from Lake Erie. Goat Island divides the River into two... Originally the Falls were located at the Niagara... years and permanently dewater the American Falls... a relatively uniform sheet of water falls over the crest. Luna Island located in the crest separates the... interbedded layers of limestones, dolomites, sandstones and shales. The enormous force of falling... diversion tunnels leading to the Ontario Hydro power
Climate Change and Future World
2013-03-01
the distribution of fish species. Increasing ocean acidification is threatening coral reefs that play an important role in mitigating the... into space the power that has not been used. This enormous thermal machine, that is the climate system, is constituted by the atmosphere, oceans... and the extension of the Arctic ice and mountain glaciers in the northern hemisphere is reducing. According to the IPCC, the Arctic Ocean could be
Smart Power and U.S. National Strategy
2013-08-01
COIN strategy being used in Afghanistan a "franchise business and the variations among the franchises is enormous." The strategy being followed... ment—i.e. force rotation and regeneration while sustaining an operational presence—at a high operational tempo and maintaining SOF-unique qualities... Admiral Olson's views regarding USSOCOM's growth in force and the increased operational tempo issues. Congressional support is being sought for
Resource-Saving Cleaning Technologies for Power Plant Waste-Water Cooling Ponds
NASA Astrophysics Data System (ADS)
Zakonnova, Lyudmila; Nikishkin, Igor; Rostovzev, Alexandr
2017-11-01
One of the frequently encountered problems of the small cooling ponds of power plants is rapid eutrophication and the related intensified development of phytoplankton ("hyper-flowering") and overgrowing of ponds by higher aquatic vegetation. As a result of hyper-flowering, an enormous amount of detritus settles on the condenser tubes, reducing the efficiency of power plant operation. The development of higher aquatic vegetation contributes to the appearance of shoals. As a result, the volume, area and other characteristics of the cooling ponds change. The article describes the environmental problems of small man-made ponds of power plants and coal mines in mining regions. Two approaches to the problem of eutrophication are considered: technological and ecological. The negative effects of herbicide application on aquatic organisms are experimentally demonstrated. An ecological approach to solving the problem by the fish-land reclamation method is shown.
Geothermal wells drilled in Transcarpathians
NASA Astrophysics Data System (ADS)
Kuzma, A.
1984-12-01
The lion's share of the Earth's electric power is known to be produced by thermal electric power plants which burn coal and gas. New storehouses of energy must be sought. It became known that the main reserves of heat in the Earth's interior are concentrated in rock. In simple terms, the technology of delivering the Earth's heat to the surface is as follows: water injected under high pressure from a river into one well comes in contact with hot beds situated at enormous depth, after which it returns by a second well in the form of a steam-water mixture, which then operates turbines of an electric power plant. The water would be used many times over in a closed cycle. This method promises many advantages. It will provide a possibility for generating cheap electric power while excluding all pollution of the environment.
Feasibility of large-scale power plants based on thermoelectric effects
NASA Astrophysics Data System (ADS)
Liu, Liping
2014-12-01
Heat resources of small temperature difference are easily accessible, free and enormous on the Earth. Thermoelectric effects provide the technology for converting these heat resources directly into electricity. We present designs for electricity generators based on thermoelectric effects that utilize heat resources of small temperature difference, e.g., ocean water at different depths and geothermal resources, and conclude that large-scale power plants based on thermoelectric effects are feasible and economically competitive. The key observation is that the power factor of thermoelectric materials, unlike the figure of merit, can be improved by orders of magnitude upon laminating good conductors and good thermoelectric materials. The predicted large-scale power generators based on thermoelectric effects, if validated, will have the advantages of the scalability, renewability, and free supply of heat resources of small temperature difference on the Earth.
Candidate R&D Thrusts for the Software Technology Initiative.
1981-05-01
computer-aided design and manufacturing efforts provide examples of multiple representations and multiple manipulation modes. R&D difficulties exist in... farfetched, but the potential payoffs are enormous. References Birk, J., and R. Kelley. Research Needed to Advance the State of Knowledge in Robotics. In... and specification languages would be beneficial. This R&D effort may also result in fusion with management tools with which an acquisition manager
OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing
NASA Astrophysics Data System (ADS)
Strayer, Michael
2005-01-01
Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples to this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new corner stone for Computational Science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations with industry and virtual prototyping. New instruments of collaboration will include institutes and centers while summer schools, workshops and outreach will invite new talent and expertise. Computational science adds new dimensions to science and its practice. Disciplines of fusion, accelerator science, and combustion are poised to blur the boundaries between pure and applied science. As we open the door into FY2006 we shall see a landscape of new scientific challenges: in biology, chemistry, materials, and astrophysics to name a few. The enabling technologies of SciDAC have been transformational as drivers of change. Planning for major new software systems assumes a base line employing Common Component Architectures and this has become a household word for new software projects. While grid algorithms and mesh refinement software have transformed applications software, data management and visualization have transformed our understanding of science from data. The Gordon Bell prize now seems to be dominated by computational science and solvers developed by TOPS ISIC. The priorities of the Office of Science in the Department of Energy are clear. The 20 year facilities plan is driven by new science. High performance computing is placed amongst the two highest priorities. Moore's law says that by the end of the next cycle of SciDAC we shall have peta-flop computers. The challenges of petascale computing are enormous. These and the associated computational science are the highest priorities for computing within the Office of Science. 
Our effort in Leadership Class computing is just a first step towards this goal. Clearly, computational science at this scale will face enormous challenges and possibilities. Performance evaluation and prediction will be critical to unraveling the needed software technologies. We must not lose sight of our overarching goal—that of scientific discovery. Science does not stand still and the landscape of science discovery and computing holds immense promise. In this environment, I believe it is necessary to institute a system of science based performance metrics to help quantify our progress towards science goals and scientific computing. As a final comment I would like to reaffirm that the shifting landscapes of science will force changes to our computational sciences, and leave you with the quote from Richard Hamming, 'The purpose of computing is insight, not numbers'.
NASA Technical Reports Server (NTRS)
Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.
1992-01-01
How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient based optimized multidisciplinary design (MdD) procedures is briefly outlined. Implications of these MdD requirements upon advanced CFD codes are somewhat different than those imposed by a single discipline design. A means for satisfying these MdD requirements for gradient information is presented which appear to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2016-12-01
Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, the ability to operate in areas of high levels of source signal spatial complexity, and non-stationarity. This goal would not be obtainable if one were to adopt the pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps should be finer than that required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computations of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6x10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures
NASA Astrophysics Data System (ADS)
Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.
2014-12-01
In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 Peta-bytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of Giga-bytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the Exa-bytes/day range, of which (after reduction and processing) around 1.5 Exa-bytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data intensive science.
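DAWN itself is not described in code here, but the kind of tradeoff it evaluates, total elapsed time as the sum of transfer and compute stages under different resource choices, can be mocked up in a few lines. Every class name, stage, and number below is a made-up illustration, not part of the actual model.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    data_gb: float          # data volume moved into the stage
    compute_hours: float    # processing time on the chosen resource
    bandwidth_gbps: float   # network bandwidth to that resource

    def elapsed_hours(self) -> float:
        transfer_hours = self.data_gb * 8 / self.bandwidth_gbps / 3600
        return transfer_hours + self.compute_hours

# Two hypothetical placements of the same reduction workflow.
local_cluster = [Stage("reduce", 5000, 12.0, 10), Stage("analyze", 500, 2.0, 10)]
remote_cloud = [Stage("reduce", 5000, 4.0, 1), Stage("analyze", 500, 0.5, 1)]

for label, workflow in (("local", local_cluster), ("cloud", remote_cloud)):
    print(label, round(sum(s.elapsed_hours() for s in workflow), 2), "hours")
```

Even this toy estimator shows how a faster but poorly connected resource can lose to a slower local one, which is the style of tradeoff such a model is meant to expose alongside uncertainty and storage estimates.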
NASA Astrophysics Data System (ADS)
Dasgupta, Sambarta
Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest, because of the advancement in sensor technology in the form of phasor measurement units (PMUs). The advancement in sensor technology has provided unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. Transient stability problem in power system is inherently a problem of stability analysis of the non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions and hence analysis tools for transient stability are asymptotic in nature. In this thesis, we discover theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite time Lyapunov exponents, adopted from geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rate are identified with the stability boundaries. These stability boundaries are used for computation of stability margin. We have used the theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high resolution time series data from the PMUs for stability prediction. The problem of sensitivity analysis of power system, subjected to changes or uncertainty in load parameters and network topology, is also studied using the theory of normally hyperbolic manifolds. The sensitivity analysis is used for the identification and rank ordering of the critical interactions and parameters in the power network. The sensitivity analysis is carried out both in finite time and in asymptotic. One of the distinguishing features of the asymptotic sensitivity analysis is that the asymptotic dynamics of the system is assumed to be a periodic orbit. For asymptotic sensitivity analysis we employ combination of tools from ergodic theory and geometric theory of dynamical systems.
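As a concrete illustration of the finite-time machinery invoked here, a finite-time Lyapunov exponent can be estimated by integrating two nearby trajectories of a single-machine swing-type model and measuring how fast their separation grows over a short window. The model, parameters, and window length below are illustrative assumptions, not the thesis's network model or its stability certificates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical single-machine swing equation:
#   M * d2(delta)/dt2 = Pm - Pmax*sin(delta) - D * d(delta)/dt
M, D, Pm, Pmax = 1.0, 0.1, 0.8, 1.0

def swing(t, x):
    delta, omega = x
    return [omega, (Pm - Pmax * np.sin(delta) - D * omega) / M]

def ftle(x0, horizon=1.0, eps=1e-6):
    # Finite-time Lyapunov exponent from the growth of a tiny perturbation over `horizon`.
    end = lambda y0: solve_ivp(swing, (0.0, horizon), y0, rtol=1e-9, atol=1e-12).y[:, -1]
    x1, x2 = end(x0), end([x0[0] + eps, x0[1]])
    return np.log(np.linalg.norm(x2 - x1) / eps) / horizon

delta_stable = np.arcsin(Pm / Pmax)  # stable equilibrium angle
# A post-fault state near the stable equilibrium vs. one pushed toward the stability boundary.
print(ftle([delta_stable + 0.1, 0.0]), ftle([delta_stable + 1.2, 0.0]))
```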
The Accident at Fukushima: What Happened?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujie, Takao
At 2:46 PM, on the coast of the Pacific Ocean in eastern Japan, people were spending an ordinary afternoon. The earthquake had a magnitude of 9.0, the fourth largest ever recorded in the world. A very large number of aftershocks were felt after the initial earthquake. More than 100 of them had a magnitude of over 6.0. There were very few injured or dead at this point. The large earthquake caused by this enormous crustal deformation spawned a rare and enormous tsunami that crashed down 30-40 minutes later. It easily cleared the high levees, washing away cars and houses and swallowing buildings of up to three stories in height. The largest tsunami reading taken from all regions was 40 meters in height. This tsunami reached the West Coast of the United States and the Pacific coast of South America, with wave heights of over two meters. It was due to this tsunami that the disaster became one of an unimaginable scale, which saw the number of dead or missing reach about 20,000 persons. The enormous tsunami headed for 15 nuclear power plants on the Pacific coast, but 11 power plants withstood the tsunami and attained cold shutdown. The flood height of the tsunami that struck each power station ranged to a maximum of 15 meters. The Fukushima Daiichi Nuclear Power Plant units experienced the largest, and the cores of three reactors suffered meltdowns. As a result, more than 160,000 residents were forced to evacuate, and are still living in temporary accommodation. The main focus of this presentation is on what happened at Fukushima Daiichi, and how station personnel responded to the accident, with considerable international support. A year after the Fukushima Daiichi accident, Japan is in the process of leveraging the lessons learned from the accident to further improve the safety of nuclear power facilities and regain the trust of society. In this connection, not only international organizations, including the IAEA and WANO, but also governmental organizations and nuclear industry representatives from various countries, have been evaluating what happened at Fukushima Daiichi. Support from many countries has contributed to successfully stabilizing the Fukushima Daiichi Nuclear Power Station. International cooperation is required as Japan starts along the long road to decommissioning the reactors. Such cooperation with the international community would achieve the decommissioning of the damaged reactors. Finally, recovery plans by the Japanese government to decontaminate surrounding regions have been started in order to get residents back to their homes as early as possible. Looking at the world's nuclear power industry, there are currently approximately 440 reactors in operation and 60 under construction. Despite the dramatic consequences of the Fukushima Daiichi catastrophe, it is expected that the importance of nuclear power generation will not change in the years to come. Newly accumulated knowledge and capabilities must be passed on to the next generation. This is the duty put upon us, and one that we must embrace.
Some recent applications of Navier-Stokes codes to rotorcraft
NASA Technical Reports Server (NTRS)
Mccroskey, W. J.
1992-01-01
Many operational limitations of helicopters and other rotary-wing aircraft are due to nonlinear aerodynamic phenomena including unsteady, three-dimensional transonic and separated flow near the surfaces and highly vortical flow in the wakes of rotating blades. Modern computational fluid dynamics (CFD) technology offers new tools to study and simulate these complex flows. However, existing Euler and Navier-Stokes codes have to be modified significantly for rotorcraft applications, and the enormous computational requirements presently limit their use in routine design applications. Nevertheless, the Euler/Navier-Stokes technology is progressing in anticipation of future supercomputers that will enable meaningful calculations to be made for complete rotorcraft configurations.
Tables of square-law signal detection statistics for Hann spectra with 50 percent overlap
NASA Technical Reports Server (NTRS)
Deans, Stanley R.; Cullers, D. Kent
1991-01-01
The Search for Extraterrestrial Intelligence, currently being planned by NASA, will require that an enormous amount of data be analyzed in real time by special purpose hardware. It is expected that overlapped Hann data windows will play an important role in this analysis. In order to understand the statistical implication of this approach, it has been necessary to compute detection statistics for overlapped Hann spectra. Tables of signal detection statistics are given for false alarm rates from 10(exp -14) to 10(exp -1) and signal detection probabilities from 0.50 to 0.99; the number of computed spectra ranges from 4 to 2000.
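As a rough illustration of where such tables come from, the sketch below computes square-law detection thresholds and required signal strengths for the idealized case of N independent (non-overlapped) accumulated spectra, using chi-squared and noncentral chi-squared statistics. The correlations introduced by 50 percent overlapped Hann windows, which are the actual subject of the paper's tables, are not modeled, so the numbers are illustrative only.

```python
# Idealized square-law detection statistics: noise-only statistic ~ chi2 with
# 2N degrees of freedom; a signal adds a noncentrality term (noncentral chi2).
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def threshold(pfa, n_spectra):
    # Detection threshold giving the requested false alarm probability.
    return chi2.isf(pfa, df=2 * n_spectra)

def required_noncentrality(pfa, pd, n_spectra):
    # Smallest signal noncentrality reaching detection probability pd.
    thr = threshold(pfa, n_spectra)
    f = lambda nc: ncx2.sf(thr, 2 * n_spectra, nc) - pd
    return brentq(f, 1e-6, 1e6)

for n in (4, 100, 2000):   # number of accumulated spectra, as in the tables
    print(n, threshold(1e-14, n), required_noncentrality(1e-14, 0.99, n))
```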
EMRlog method for computer security for electronic medical records with logic and data mining.
Martínez Monterrubio, Sergio Mauricio; Frausto Solis, Juan; Monroy Borja, Raúl
2015-01-01
The proper functioning of a hospital computer system is an arduous work for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of the entire or part of the hospital data. This paper presents a new method named EMRlog for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information such as databases, applications, and medical records. Firstly, a syntactic verification step is applied by using predicate logic. Then data mining techniques are used to detect which security policies have really been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed for achieving a safer computer system.
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]
2011-08-16
An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
Armaroli, Nicola; Balzani, Vincenzo
2011-01-17
Hydrogen is often proposed as the fuel of the future, but the transformation from the present fossil fuel economy to a hydrogen economy will need the solution of numerous complex scientific and technological issues, which will require several decades to be accomplished. Hydrogen is not an alternative fuel, but an energy carrier that has to be produced by using energy, starting from hydrogen-rich compounds. Production from gasoline or natural gas does not offer any advantage over the direct use of such fuels. Production from coal by gasification techniques with capture and sequestration of CO₂ could be an interim solution. Water splitting by artificial photosynthesis, photobiological methods based on algae, and high temperatures obtained by nuclear or concentrated solar power plants are promising approaches, but still far from practical applications. In the next decades, the development of the hydrogen economy will most likely rely on water electrolysis by using enormous amounts of electric power, which in turn has to be generated. Producing electricity by burning fossil fuels, of course, cannot be a rational solution. Hydroelectric power can give but a very modest contribution. Therefore, it will be necessary to generate large amounts of electric power by nuclear energy or by renewable energies. A hydrogen economy based on nuclear electricity would imply the construction of thousands of fission reactors, thereby magnifying all the problems related to the use of nuclear energy (e.g., safe disposal of radioactive waste, nuclear proliferation, plant decommissioning, uranium shortage). In principle, wind, photovoltaic, and concentrated solar power have the potential to produce enormous amounts of electric power, but, except for wind, such technologies are too underdeveloped and expensive to tackle such a big task in a short period of time. A full development of a hydrogen economy also needs improvements in hydrogen storage, transportation and distribution. Hydrogen and electricity can be easily interconverted by electrolysis and fuel cells, and which of these two energy carriers will prevail, particularly in the crucial field of road vehicle powering, will depend on the solutions found for their peculiar drawbacks, namely storage for electricity and transportation and distribution for hydrogen. There is little doubt that power production by renewable energies, energy storage by hydrogen, and electric power transportation and distribution by smart electric grids will play an essential role in phasing out fossil fuels. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Auditing: Perspectives from Multiperson Decision Theory.
1982-10-01
Transmission" by W.P. Rogerson 378. "Unemployment Equilibrium with Stochastic Rationing of Supplies" by Ho-mo i WU. 379. " Optimal Price and Income... rational with enormous powers of calculation. At first this may seem utterly inappropriate for the study of problems in accounting. Nevertheless, there...that at any price the investor offers he will tend to get acceptance only from an owner with assets having a lesser value. Similarly, if the owner
Biological Information Processing in Single Microtubules
2012-02-15
flux moves in the brain at a speed of 400km/hr, when electrons move only a few cm in years, we have found that through microtubule, solitons propagate... soliton ”. Introduction: We started working on the brain microtubule way back in 2008, since, I understood that in the brain, neurons separated by...6 inches, synchronize, get phase and frequency locked and that is the source of enormous computability of the brain. However, no experimental
NASA Astrophysics Data System (ADS)
Ghosh, Sujoy Kumar; Mandal, Dipankar
2017-03-01
A human interactive self-powered wearable sensor is designed using waste by-product prawn shells. The structural origin of intrinsic piezoelectric characteristics of bio-assembled chitin nanofibers has been investigated. It allows the prawn shell to make a tactile sensor that performs also as a highly durable mechanical energy harvester/nanogenerator. The feasibility and fundamental physics of self-powered consumer electronics even from human perception is highlighted by prawn shells made nanogenerator (PSNG). High fidelity and non-invasive monitoring of vital signs, such as radial artery pulse wave and coughing actions, may lead to the potential use of PSNG for early intervention. It is presumed that PSNG has enormous future aspects in real-time as well as remote health care assessment.
Power Supplies for High Energy Particle Accelerators
NASA Astrophysics Data System (ADS)
Dey, Pranab Kumar
2016-06-01
The ongoing research and development projects with the Large Hadron Collider at CERN, Geneva, Switzerland have generated enormous enthusiasm and interest in the ultimate findings on the `God particle'. This paper attempts to unfold the power supply requirements and the methodology adopted to meet the stringent demands of such high-energy particle accelerators during the initial stages of the search for the ultimate particles. An attempt has also been made to highlight the present status of power supply requirements in some high-energy accelerators, so that precautionary lessons drawn from earlier experience can be applied during design and development, which will be of help for the proposed third-generation synchrotron to be installed in India at huge cost.
Lighting: The Killer App of Village Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-01
This paper looks at lighting systems as the major market for village level power generation. To the consumer, lighting is something that is needed and could come from a much friendlier source, and the issues of affordability, convenience, and reliability are important. To the supplier, lighting has an enormous range of potential customers; it opens the opportunity for other services, and even small demand can give big returns. Because the efficiency of the light source is critical to the number of lights which a fixed power supply can drive, it is important to pick the proper type of bulb to use in this system. The paper discusses test results from an array of fluorescent and incandescent lamps, compared with a kerosene lamp. Low wattage fluorescents seem to perform the best.
Computational Fact Checking from Knowledge Networks
Ciampaglia, Giovanni Luca; Shiralkar, Prashant; Rocha, Luis M.; Bollen, Johan; Menczer, Filippo; Flammini, Alessandro
2015-01-01
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation. PMID:26083336
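A toy sketch of the shortest-path idea (not the authors' exact semantic proximity metric) on a tiny hypothetical knowledge graph, where paths passing through generic, high-degree concepts are penalized so that specific connections score as better supported:

```python
# Fact checking as path finding on a knowledge graph (toy illustration).
import math
import networkx as nx

G = nx.Graph()
triples = [  # tiny hypothetical knowledge graph
    ("Barack_Obama", "Honolulu"), ("Honolulu", "Hawaii"),
    ("Barack_Obama", "United_States"), ("Hawaii", "United_States"),
    ("Canberra", "Australia"), ("Australia", "Earth"), ("United_States", "Earth"),
]
G.add_edges_from(triples)

def path_cost(G, path):
    # Penalize intermediate nodes by log(degree): generic hubs cost more.
    return sum(math.log(G.degree(v)) for v in path[1:-1]) if len(path) > 2 else 0.0

def support(G, u, v):
    best = min((path_cost(G, p) for p in nx.all_simple_paths(G, u, v, cutoff=4)),
               default=math.inf)
    return 1.0 / (1.0 + best)          # higher = better supported claim

print(support(G, "Barack_Obama", "Hawaii"))     # specific, well-supported link
print(support(G, "Barack_Obama", "Australia"))  # connected only via generic hubs
```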
The Linux operating system: An introduction
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.
The landscape for epigenetic/epigenomic biomedical resources
Shakya, Kabita; O'Connell, Mary J.; Ruskin, Heather J.
2012-01-01
Recent advances in molecular biology and computational power have seen the biomedical sector enter a new era, with corresponding development of Bioinformatics as a major discipline. Generation of enormous amounts of data has driven the need for more advanced storage solutions and shared access through a range of public repositories. The number of such biomedical resources is increasing constantly and mining these large and diverse data sets continues to present real challenges. This paper attempts a general overview of currently available resources, together with remarks on their data mining and analysis capabilities. Of interest here is the recent shift in focus from genetic to epigenetic/epigenomic research and the emergence and extension of resource provision to support this both at local and global scale. Biomedical text and numerical data mining are both considered, the first dealing with automated methods for analyzing research content and information extraction, and the second (broadly) with pattern recognition and prediction. Any summary and selection of resources is inherently limited, given the spectrum available, but the aim is to provide a guideline for the assessment and comparison of currently available provision, particularly as this relates to epigenetics/epigenomics. PMID:22874136
Latif, Rabia; Abbas, Haider; Assar, Saïd
2014-11-01
Wireless Body Area Networks (WBANs) have emerged as a promising technology that has shown enormous potential in improving the quality of healthcare, and has thus found a broad range of medical applications from ubiquitous health monitoring to emergency medical response systems. The huge amount of highly sensitive data collected and generated by WBAN nodes requires an ascendable and secure storage and processing infrastructure. Given the limited resources of WBAN nodes for storage and processing, the integration of WBANs and cloud computing may provide a powerful solution. However, despite the benefits of cloud-assisted WBAN, several security issues and challenges remain. Among these, data availability is the most nagging security issue. The most serious threat to data availability is a distributed denial of service (DDoS) attack that directly affects the all-time availability of a patient's data. The existing solutions for standalone WBANs and sensor networks are not applicable in the cloud. The purpose of this review paper is to identify the most threatening types of DDoS attacks affecting the availability of a cloud-assisted WBAN and review the state-of-the-art detection mechanisms for the identified DDoS attacks.
The simultaneous evolution of author and paper networks
Börner, Katy; Maru, Jeegar T.; Goldstone, Robert L.
2004-01-01
There has been a long history of research into the structure and evolution of mankind's scientific endeavor. However, recent progress in applying the tools of science to understand science itself has been unprecedented because only recently has there been access to high-volume and high-quality data sets of scientific output (e.g., publications, patents, grants) and computers and algorithms capable of handling this enormous stream of data. This article reviews major work on models that aim to capture and recreate the structure and dynamics of scientific evolution. We then introduce a general process model that simultaneously grows coauthor and paper citation networks. The statistical and dynamic properties of the networks generated by this model are validated against a 20-year data set of articles published in PNAS. Systematic deviations from a power law distribution of citations to papers are well fit by a model that incorporates a partitioning of authors and papers into topics, a bias for authors to cite recent papers, and a tendency for authors to cite papers cited by papers that they have read. In this TARL model (for topics, aging, and recursive linking), the number of topics is linearly related to the clustering coefficient of the simulated paper citation network. PMID:14976254
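The following minimal sketch, with arbitrary parameters rather than the values fitted to the PNAS data, illustrates the mechanics described for the TARL model: papers belong to topics, cite recent papers in the same topic (aging), and also cite papers cited by the papers they cite (recursive linking).

```python
# Toy growth of a paper citation network in the spirit of TARL (illustrative only).
import random
from collections import Counter

random.seed(0)
N_TOPICS, N_PAPERS, REFS = 5, 3000, 3
topic_of, cites = [], []                    # per-paper topic and reference list

for p in range(N_PAPERS):
    t = random.randrange(N_TOPICS)
    # Aging + topics: candidate references are recent papers in the same topic.
    pool = [q for q in range(max(0, p - 400), p) if topic_of[q] == t]
    refs = set(random.sample(pool, min(REFS, len(pool))))
    for q in list(refs):                    # recursive linking: cite a cited paper
        if cites[q]:
            refs.add(random.choice(cites[q]))
    topic_of.append(t)
    cites.append(sorted(refs))

indeg = Counter(q for refs in cites for q in refs)   # citation counts
print("most cited papers:", indeg.most_common(5))
```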
Andromeda's SMBH Projected Accretion Rate
NASA Astrophysics Data System (ADS)
Wilson, John
2014-03-01
A formula for calculating the half-life of galaxy clusters is proposed. A galactic half-life is the estimated amount of time that the most massive supermassive black hole (SMBH) in the galaxy cluster will have accreted one half of the mass in the cluster. The calculation is based on a projection of the SMBH continuing the exponentially decreasing rate of accretion that it had in its first 13 billion years. The calculated half-life for the Andromeda SMBH is approximately 1.4327e14 years from the Big Bang. Several proposals have suggested that black holes could be significant factors in the formation of new universes. Part of the verification or falsification of this hypothesis could be done by an N-body simulation. These simulations require an enormous amount of computer power and time. Some plausible projection of the growth of the supermassive black hole is needed to prepare an N-body simulation budget proposal. For now, this method provides an estimate for the growth rate of the Andromeda SMBH and for the eventual disposition of most of the galaxy cluster's mass, which is either accreted by the SMBH, lost by ejection from the cluster, or lost in the form of energy.
Creation of the BMA ensemble for SST using a parallel processing technique
NASA Astrophysics Data System (ADS)
Kim, Kwangjin; Lee, Yang Won
2013-10-01
Although they serve the same purpose, satellite products differ in value because of their inescapable uncertainties. Satellite products have also been accumulated over a long time, and the kinds of products are various and enormous in volume, so efforts to reduce the uncertainty and to deal with these enormous data are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density function (PDF) using posterior probabilities as weights. The posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as a weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA for satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.
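A minimal sketch of Gaussian-kernel BMA with EM-estimated weights, applied to synthetic data standing in for the three satellite SST products (the actual retrievals, collocation and bias correction are not reproduced here):

```python
# Bayesian Model Averaging with Gaussian kernels on synthetic SST-like data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
truth = 20 + rng.normal(0, 1.0, 200)                      # "observed" SST (degC)
members = np.stack([truth + rng.normal(b, s, truth.size)  # biased, noisy products
                    for b, s in ((0.3, 0.5), (-0.2, 0.8), (0.1, 1.2))])

K, N = members.shape
w, var = np.full(K, 1.0 / K), 1.0
for _ in range(100):                                       # EM iterations
    dens = norm.pdf(truth, loc=members, scale=np.sqrt(var))   # (K, N) likelihoods
    z = w[:, None] * dens
    z /= z.sum(axis=0, keepdims=True)                      # E-step: responsibilities
    w = z.mean(axis=1)                                     # M-step: member weights
    var = np.sum(z * (truth - members) ** 2) / z.sum()     #         common variance

bma_mean = w @ members                                     # ensemble SST estimate
print("weights:", np.round(w, 3),
      "RMSE:", np.sqrt(np.mean((bma_mean - truth) ** 2)))
```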
Lessons from wind policy in Portugal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peña, Ivonne; L. Azevedo, Inês; Marcelino Ferreira, Luís António Fialho
Wind capacity and generation grew rapidly in several European countries, such as Portugal. Wind power adoption in Portugal began in the early 2000s, incentivized by a continuous feed-in tariff policy mechanism, coupled with public tenders for connection licenses in 2001, 2002, and 2005. These policies led to an enormous success in terms of having a large share of renewables providing electricity services: wind alone accounts today for ~23.5% of electricity demand in Portugal. We explain the reasons wind power became a key part of Portugal's strategy to comply with European Commission climate and energy goals, and provide a detailed review of the wind feed-in tariff mechanism. We describe the actors involved in wind power production growth. We estimate the environmental and energy dependency gains achieved through wind power generation, and highlight the correlation between wind electricity generation and electricity exports. Finally, we compare the Portuguese wind policies with other countries' policy designs and discuss the relevance of a feed-in tariff reform for subsequent wind power additions.
WFIRST Observatory Performance
NASA Technical Reports Server (NTRS)
Kruk, Jeffrey W.
2012-01-01
The WFIRST observatory will be a powerful and flexible wide-field near-infrared facility. The planned surveys will provide data applicable to an enormous variety of astrophysical science. This presentation will provide a description of the observatory and its performance characteristics. This will include a discussion of the point spread function, signal-to-noise budgets for representative observing scenarios and the corresponding limiting sensitivity. Emphasis will be given to providing prospective Guest Observers with information needed to begin thinking about new observing programs.
LOx/LCH4: A Unifying Technology for Future Exploration
NASA Technical Reports Server (NTRS)
Banker, Brian; Ryan, Abigail
2014-01-01
OVERVIEW For every pound of payload landed on Mars, 226 pounds are required on Earth to get it there. Due to this enormous mass gear-ratio, increasing commonality between lander subsystems, such as power, propulsion, and life support, results in tremendous launch mass and cost savings. Human-Mars architectures point to an oxygen-methane economy, utilizing common commodities scavenged from the planetary atmosphere and soil via In-Situ Resource Utilization (ISRU) and common commodity tankage across sub-systems.
User-Computer Interactions: Some Problems for Human Factors Research
1981-09-01
accessibility from the work place or home of R. information stored in major repositories. o Two-way real-time communication between broadcasting - facilities...Miller, and R.W. Pew (BBN Inc.) MDA 903-80-C-0551 9. PERFORMING ORGANIZATION NAME AND ADDRESS 10. PROGRAM ELEMENT. PROJECT, TASK AREA & WORK UNIT NUMBERS...average U.S. home has gone from about 10 in 1940 to about 100 in 1960 to a few thousand in 1930. Collectively, these trends represent an enormous
Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.
Profiling an application for power consumption during execution on a compute node
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-09-17
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
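Schematically, the profiling idea described above can be read as combining a hardware power-consumption profile with the mix of operations an application performs. The sketch below is an invented illustration (operation classes and wattages are assumptions, not the patented implementation).

```python
# Estimate an application's power profile from a hardware profile and op mix.
HARDWARE_PROFILE = {          # watts drawn by the compute node per operation class
    "flop": 42.0, "memory": 25.0, "network": 18.0, "idle": 60.0,
}

def application_power_profile(op_time_fractions, hw_profile=HARDWARE_PROFILE):
    """op_time_fractions: fraction of runtime spent in each operation class."""
    assert abs(sum(op_time_fractions.values()) - 1.0) < 1e-9
    active = sum(frac * hw_profile[op] for op, frac in op_time_fractions.items())
    return hw_profile["idle"] + active    # baseline plus activity-dependent draw

# Example: an application measured to be 50% compute, 30% memory bound,
# 20% communication (hypothetical counters).
print(application_power_profile({"flop": 0.5, "memory": 0.3, "network": 0.2}))
```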
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth. This means that for an FDTD simulation the time step must be fine enough to resolve the highest frequency, while the total number of time steps must be large enough to cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. From our results, we found that even a previous generation of CPU/GPU hardware speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
NASA Astrophysics Data System (ADS)
Singh, Sukhdeep; Singh, Janpreet; Tripathi, S. K.
2018-05-01
Bismuth antimony telluride (Bi-Sb-Te) compounds have been investigated for many decades for thermoelectric (TE) power generation and cooling purposes. We synthesized this compound with the stoichiometry Bi1.2Sb0.8Te3 through a melt-cooling technique, and thin films of the as-synthesized material were deposited by thermal evaporation. The prime focus of the present work is to study the influence of annealing temperature on the room temperature (RT) power factor of the thin films. The electrical conductivity and Seebeck coefficient were measured, and the calculated power factor showed a peak value at 323 K. The compound's performance is comparable to some very efficient reported Bi-Sb-Te stoichiometries at RT. The values observed show that the material has enormous potential for energy production at ambient temperature scales.
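For reference, the thermoelectric power factor referred to above is PF = S^2 * sigma. The sketch below evaluates it for placeholder values, not the measured film data from the paper.

```python
# Thermoelectric power factor PF = S^2 * sigma (placeholder numbers).
def power_factor(seebeck_uV_per_K, conductivity_S_per_cm):
    S = seebeck_uV_per_K * 1e-6          # convert to V/K
    sigma = conductivity_S_per_cm * 1e2  # convert to S/m
    return S ** 2 * sigma                # W m^-1 K^-2

# e.g. a hypothetical film with S = 180 uV/K and sigma = 800 S/cm:
pf = power_factor(180, 800)
print(f"{pf * 1e3:.2f} mW m^-1 K^-2")    # ~2.6 mW/(m K^2)
```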
Time Resolved Spectroscopy, High Sensitivity Power Spectrum & a Search for the X-Ray QPO in NGC 5548
NASA Astrophysics Data System (ADS)
Yaqoob, Tahir
1999-09-01
Controversy surrounds the EXOSAT discovery of a QPO (period ~500 s) in NGC 5548 due to the data being plagued by high background and instrumental systematics. If the NGC 5548 QPO is real, the implications for the physics of the X-ray emission mechanism and inner-most disk/black-hole system are enormous. AXAF provides the first opportunity to settle the issue, capable of yielding power spectra with unprecedented sensitivity, pushing the limit on finding new features. Using HETG/ACIS we will also perform time-resolved spectroscopy of the ionized absorption features and Fe-K emission line, search for energy-dependent time lags in the continuum, between the continuum and spectral features, and between the spectral features. These data will provide powerful constraints on models of AGN.
Profiling an application for power consumption during execution on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2012-08-21
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
Richard Feynman and computation
NASA Astrophysics Data System (ADS)
Hey, Tony
1999-04-01
The enormous contribution of Richard Feynman to modern physics is well known, both to teaching through his famous Feynman Lectures on Physics, and to research with his Feynman diagram approach to quantum field theory and his path integral formulation of quantum mechanics. Less well known perhaps is his long-standing interest in the physics of computation and this is the subject of this paper. Feynman lectured on computation at Caltech for most of the last decade of his life, first with John Hopfield and Carver Mead, and then with Gerry Sussman. The story of how these lectures came to be written up as the Feynman Lectures on Computation is briefly recounted. Feynman also discussed the fundamentals of computation with other legendary figures of the computer science and physics community such as Ed Fredkin, Rolf Landauer, Carver Mead, Marvin Minsky and John Wheeler. He was also instrumental in stimulating developments in both nanotechnology and quantum computing. During the 1980s Feynman re-visited long-standing interests both in parallel computing with Geoffrey Fox and Danny Hillis, and in reversible computation and quantum computing with Charles Bennett, Norman Margolus, Tom Toffoli and Wojciech Zurek. This paper records Feynman's links with the computational community and includes some reminiscences about his involvement with the fundamentals of computing.
Hybrid integrated biological-solid-state system powered with adenosine triphosphate.
Roseman, Jared M; Lin, Jianxun; Ramakrishnan, Siddharth; Rosenstein, Jacob K; Shepard, Kenneth L
2015-12-07
There is enormous potential in combining the capabilities of the biological and the solid state to create hybrid engineered systems. While there have been recent efforts to harness power from naturally occurring potentials in living systems in plants and animals to power complementary metal-oxide-semiconductor integrated circuits, here we report the first successful effort to isolate the energetics of an electrogenic ion pump in an engineered in vitro environment to power such an artificial system. An integrated circuit is powered by adenosine triphosphate through the action of Na(+)/K(+) adenosine triphosphatases in an integrated in vitro lipid bilayer membrane. The ion pumps (active in the membrane at numbers exceeding 2 × 10(6) mm(-2)) are able to sustain a short-circuit current of 32.6 pA mm(-2) and an open-circuit voltage of 78 mV, providing for a maximum power transfer of 1.27 pW mm(-2) from a single bilayer. Two series-stacked bilayers provide a voltage sufficient to operate an integrated circuit with a conversion efficiency of chemical to electrical energy of 14.9%.
The use of dual mode thermionic reactors in supporting Earth orbital and space exploration missions
NASA Astrophysics Data System (ADS)
Zubrin, Robert M.; Sulmeisters, Tal K.
1993-01-01
Missions requiring large amounts of electric power to support their payload functions can be enabled through the employment of nuclear electric power reactors, which in some cases can also assist the mission by making possible the employment of high specific impulse electric propulsion. However, it is found that the practicality and versatility of using a power reactor to provide advanced propulsion are enormously enhanced if the reactor is configured in such a way as to allow it to generate a certain amount of direct thrust as well. The use of such a system allows the creation of a common bus upper stage that can provide both high power and high impulse (with short orbit transfer times). It is shown that such a system, termed an Integral Power and Propulsion Stage (IPAPS), is optimal for supporting many Earth, Lunar, planetary and asteroidal observation, exploration, and communication support missions, and it is therefore recommended that the nuclear power reactor ultimately selected by the government for development and production be one that can be configured for such a function.
Esquinas-Alcázar, José
2005-12-01
Crop genetic diversity - which is crucial for feeding humanity, for the environment and for sustainable development - is being lost at an alarming rate. Given the enormous interdependence of countries and generations on this genetic diversity, this loss raises critical socio-economic, ethical and political questions. The recent ratification of a binding international treaty, and the development of powerful new technologies to conserve and use resources more effectively, have raised expectations that must now be fulfilled.
Mathematics and the Internet: A Source of Enormous Confusion and Great Potential
2009-05-01
free Internet Myth The story recounted below of the scale-free nature of the Internet seems convincing, sound, and al- most too good to be true ...models. In fact, much of the initial excitement in the nascent field of network science can be attributed to an ear- ly and appealingly simple class...this new class of networks, com- monly referred to as scale-free networks. The term scale-free derives from the simple observation that power-law node
Council of War: A History of the Joint Chiefs of Staff, 1942-1991
2012-07-01
PLANNING nuclear fission could produce enormous explosive power. Among those alarmed by the German breakthrough were Leo Szilard, a Hungarian expatriate...Letter, Roosevelt to Einstein, October 19, 1939, Safe File, PSF, Roosevelt Library. See also Leo Szilard,"Reminiscences," in Perspec- tives in American...i960, loc . cit. 85 Michael A. Palmer, Guardians of the Gulf: A History of America’s Expanding Role in the Persian Gulf, 1833-IQ92 (New York: Free
Model based verification of the Secure Socket Layer (SSL) Protocol for NASA systems
NASA Technical Reports Server (NTRS)
Powell, John D.; Gilliam, David
2004-01-01
The National Aeronautics and Space Administration (NASA) has tens of thousands of networked computer systems and applications. Software security vulnerabilities present risks such as lost or corrupted data, information theft, and unavailability of critical systems. These risks represent potentially enormous costs to NASA. The NASA Code Q research initiative 'Reducing Software Security Risk (RSSR) Through an Integrated Approach' offers formal verification of information technology (IT), through the creation of a Software Security Assessment Instrument (SSAI), to address software security risks.
TACITUS: Text Understanding for Strategic Computing
1990-11-01
using TACITUS for the OPREPs was like driving a Porsche in America. Moreover, an enormous amount of time had to be spent in taking care of very minor...characters to which [a] can correspond that does not include [b]. A comparison of each system’s third type of rule involves compostion of rules and is the...production. For a significant number of examples (34), it did not matter where the attachment was made. For instance, in John made coffee for Mary
1988-02-28
enormous investment in software. This is an area extremely important objective. We need additional where better methodologies , tools and theories...microscopy (SEM) and optical mi- [131 Hanson, A., et a. "A Methodology for the Develop- croscopy. Current activities include the study of SEM im- ment...through a phased knowledge engineering methodology Center (ARC) and NASA Johnson Space Center consisting of: prototype knowledge base develop- iJSC
Neural network wavelet technology: A frontier of automation
NASA Technical Reports Server (NTRS)
Szu, Harold
1994-01-01
Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the so-called 6th Generation Computer. Enormous amounts of resources have been poured into R&D. Wavelet Transforms (WT) have replaced Fourier Transforms (FT) in Wideband Transient (WT) cases since the discovery of WT in 1985. The list of successful applications includes the following: earthquake prediction; radar identification; speech recognition; stock market forecasting; FBI fingerprint image compression; and telecommunication ISDN-data compression.
Communications satellite system for Africa
NASA Astrophysics Data System (ADS)
Kriegl, W.; Laufenberg, W.
1980-09-01
Earlier established requirement estimations were improved upon by contacting African administrations and organizations. An enormous demand is shown to exist for telephony and teletype services in rural areas. It is shown that educational television broadcasting should be realized in the current African transport and communications decade (1978-1987). Radio broadcasting is proposed in order to overcome illiteracy and to improve educational levels. The technical and commercial feasibility of the system is provided by computer simulations which demonstrate how the required objectives can be fulfilled in conjunction with ground networks.
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-10
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-06-05
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
Emerging Applications of Liquid Crystals Based on Nanotechnology
Sohn, Jung Inn; Hong, Woong-Ki; Choi, Su Seok; Coles, Harry J.; Welland, Mark E.; Cha, Seung Nam; Kim, Jong Min
2014-01-01
Diverse functionalities of liquid crystals (LCs) offer enormous opportunities for their potential use in advanced mobile and smart displays, as well as novel non-display applications. Here, we present snapshots of the research carried out on emerging applications of LCs ranging from electronics to holography and self-powered systems. In addition, we will show our recent results focused on the development of new LC applications, such as programmable transistors, a transparent and active-type two-dimensional optical array and self-powered display systems based on LCs, and will briefly discuss their novel concepts and basic operating principles. Our research will give insights not only into comprehensively understanding technical and scientific applications of LCs, but also developing new discoveries of other LC-based devices. PMID:28788555
Policy without politics: the limits of social engineering.
Navarro, Vicente
2003-01-01
The extent of coverage provided by a country's health services is directly related to the level of development of that country's democratic process (and its power relations). The United States is the only developed country whose government does not guarantee access to health care for its citizens. It is also the developed country with the least representative and most insufficient democratic institutions, owing to the constitutional framework of the political system, the privatization of the electoral process, and the enormous power of corporate interests in both the media and the political process. As international experience shows, without a strong labor-based movement willing to be radical in its protests, a universal health care program will never be accepted by the US establishment.
System design and implementation of digital-image processing using computational grids
NASA Astrophysics Data System (ADS)
Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping
2005-06-01
As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved, and the difficulties of data processing and transfer, an important issue for current computer and geo-science experts is developing internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to solve imbalances of network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology: namely, spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated on the basis of the experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of the application of computational grids to digital-image processing.
SUPERNOVAE POWERED BY MAGNETARS THAT TRANSFORM INTO BLACK HOLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriya, Takashi J.; Metzger, Brian D.; Blinnikov, Sergei I., E-mail: takashi.moriya@nao.ac.jp
2016-12-10
Rapidly rotating, strongly magnetized neutron stars (NSs; magnetars) can release their enormous rotational energy via magnetic spin-down, providing a power source for bright transients such as superluminous supernovae (SNe). On the other hand, particularly massive (so-called supramassive) NSs require a minimum rotation rate to support their mass against gravitational collapse, below which the NS collapses to a black hole (BH). We model the light curves (LCs) of SNe powered with magnetars that transform into BHs. Although the peak luminosities can reach high values in the range of superluminous SNe, their post maximum LCs can decline very rapidly because of the sudden loss of the central energy input. Early BH transformation also enhances the shock breakout signal from the magnetar-driven bubble relative to the main SN peak. Our synthetic LCs of SNe powered by magnetars transforming to BHs are consistent with those of some rapidly evolving bright transients recently reported by Arcavi et al.
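A minimal sketch of the central-engine input that magnetar-powered supernova models of this kind typically assume: a magnetic-dipole spin-down luminosity, truncated at the moment the supramassive NS collapses to a BH. All numerical values below are illustrative assumptions, not the fits of the paper.

```python
# Magnetar spin-down energy input with a sudden cutoff at BH formation.
import numpy as np

L0 = 1e45          # erg/s, initial spin-down luminosity (assumed)
t_sd = 1e5         # s, spin-down timescale (assumed)
t_collapse = 5e5   # s, NS -> BH transformation time (assumed)

def engine_luminosity(t):
    t = np.asarray(t, dtype=float)
    L = L0 / (1.0 + t / t_sd) ** 2             # dipole spin-down law
    return np.where(t < t_collapse, L, 0.0)    # sudden loss of central input

t = np.logspace(3, 7, 9)                       # 10^3 ... 10^7 s
for ti, Li in zip(t, engine_luminosity(t)):
    print(f"t = {ti:8.2e} s   L = {Li:8.2e} erg/s")
```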
NASA Astrophysics Data System (ADS)
Pérez-Tomás, Amador; Chikoidze, Ekaterine; Jennings, Michael R.; Russell, Stephen A. O.; Teherani, Ferechteh H.; Bove, Philippe; Sandana, Eric V.; Rogers, David J.
2018-03-01
Oxides represent the largest family of wide bandgap (WBG) semiconductors and also offer a huge potential range of complementary magnetic and electronic properties, such as ferromagnetism, ferroelectricity, antiferroelectricity and high-temperature superconductivity. Here, we review our integration of WBG and ultra-WBG semiconductor oxides into different solar cell architectures, where they play the role of transparent conductive electrodes and/or barriers, bringing unique functionalities into the structure such as above-bandgap voltages or switchable interfaces. We also give an overview of the state of the art and perspectives for the emerging semiconductor β-Ga2O3, which is widely forecast to herald the next generation of power electronic converters because it combines a UWBG with the capacity to conduct electricity. This opens unprecedented possibilities for the monolithic integration in solar cells of both self-powered logic and power electronics functionalities. Therefore, WBG and UWBG oxides have enormous promise to become key enabling technologies for the zero-emissions smart integration of the internet of things.
Warner, Robin F
2012-08-15
The generation of electricity through hydropower can, along with other anthropogenic activities, degrade river hydromorphology and ecosystems. In this case, water for power generation is diverted from the River Durance to a canal, which services a chain of 17 power stations, with the lower three being in the catchment of the Etang de Berre. This means that excess water and sediments are discharged into the salt-water lagoon with enormous consequences for ecosystems there. This paper summarizes the impacts of HEP and other human activities on both the river and lagoonal systems. It also considers agency and government attempts to understand and counter the degradation of these systems, both to date and in the future, with the latter catering for the potential impacts of future human development and global warming. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ohnishi, Takeo
2012-01-01
On March 11, 2011 eastern Japan was struck by a magnitude 9.0 earthquake and an enormous tsunami, over 13 m in height, which together killed over 20,500 people and resulted in the evacuation of over 320,000 people from the devastated areas. This paper describes the damage sustained by the Fukushima-Daiichi nuclear power plant during this unpredicted major natural disaster and the events that happened in the months after this accident. The events occurring at the Fukushima-Daiichi nuclear power plant, the actions taken to minimize the effects of the damage to the plant and to protect the public, and the points at which the responses proved to be inadequate all offer lessons that will be of value to those planning for and responding to future natural disasters and accidents in Japan and around the world.
NASA Astrophysics Data System (ADS)
Hu, Xiaosong; Martinez, Clara Marina; Yang, Yalian
2017-03-01
Holistic energy management of plug-in hybrid electric vehicles (PHEVs) in a smart grid environment constitutes an enormous control challenge. This paper responds to this challenge by investigating the interactions among three important control tasks in PHEVs, i.e., charging, on-road power management, and battery degradation mitigation. Three notable original contributions distinguish our work from existing endeavors. First, a new convex programming (CP)-based cost-optimal control framework is constructed to minimize the daily operational expense of a PHEV, which seamlessly integrates the costs of the three tasks. Second, a straightforward but useful sensitivity assessment of the optimization outcome is executed with respect to price changes of the battery and energy carriers. Third, the potential impact of vehicle-to-grid (V2G) power flow on the PHEV economy is analyzed through a multitude of comparative studies.
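In the spirit of the convex-programming framework described above, the sketch below solves a toy cost-optimal charging problem. Prices, charger and battery limits, and the quadratic wear penalty are hypothetical, the on-road power management and detailed degradation model are not included, and the cvxpy package is assumed to be available.

```python
# Toy convex cost-optimal charging schedule over one day.
import numpy as np
import cvxpy as cp

T = 24                                    # hourly decisions over one day
price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, T))   # $/kWh (assumed)
p_max, cap, soc0, soc_target = 3.3, 12.0, 3.0, 11.0          # kW, kWh (assumed)
wear = 0.005                              # $/kW^2 per hour, simple wear proxy (assumed)

p = cp.Variable(T, nonneg=True)           # charging power each hour
soc = soc0 + cp.cumsum(p)                 # idealized (loss-free) state of charge
objective = cp.Minimize(price @ p + wear * cp.sum_squares(p))
constraints = [p <= p_max, soc <= cap, soc[T - 1] >= soc_target]
cp.Problem(objective, constraints).solve()
print(np.round(p.value, 2))               # charging concentrates in cheap hours
```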
Sales, B B; Saakes, M; Post, J W; Buisman, C J N; Biesheuvel, P M; Hamelers, H V M
2010-07-15
The entropy increase of mixing two solutions of different salt concentrations can be harnessed to generate electrical energy. Worldwide, the potential of this resource, the controlled mixing of river and seawater, is enormous, but existing conversion technologies are still complex and expensive. Here we present a small-scale device that directly generates electrical power from the sequential flow of fresh and saline water, without the need for auxiliary processes or converters. The device consists of a sandwich of porous "supercapacitor" electrodes, ion-exchange membranes, and a spacer and can be further miniaturized or scaled-out. Our results demonstrate that alternating the flow of saline and fresh water through a capacitive cell allows direct autogeneration of voltage and current and consequently leads to power generation. Theoretical calculations aid in providing directions for further optimization of the properties of membranes and electrodes.
NASA Astrophysics Data System (ADS)
Shostak, Seth
2011-02-01
While modern SETI experiments are often highly sensitive, reaching detection limits of 10^-25 W/m^2/Hz in the radio, interstellar distances imply that if extraterrestrial societies are using isotropic or broad-beamed transmitters, the power requirements for their emissions are enormous. Indeed, isotropic transmissions to the entire Galaxy, sufficiently intense to be detectable by our current searches, would consume power comparable to the stellar insolation of an Earth-size planet. In this paper we consider how knowledge can be traded for power, and how, and to what degree, astronomical accuracy can reduce the energy costs of a comprehensive transmission program by putative extraterrestrials. Indeed, an exploration of how far this trade-off might be taken suggests that extraterrestrial transmitting strategies of civilizations only modestly more advanced than our own would be, as are our SETI receiving experiments, inexpensive enough to allow multiple efforts. We explore the consequences this supposition has for our SETI listening experiments.
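The power argument can be made concrete with a back-of-the-envelope sketch: the effective isotropic radiated power a transmitter needs so that a receiver with the quoted flux-density sensitivity detects it at a given distance. The 1 Hz channel width and the distances are illustrative assumptions.

```python
# Required isotropic transmitter power P = 4*pi*d^2 * S * B.
import math

S = 1e-25            # W m^-2 Hz^-1, quoted radio SETI sensitivity
channel = 1.0        # Hz, assumed narrowband signal width
LY = 9.4607e15       # metres per light year

def isotropic_power(distance_ly):
    d = distance_ly * LY
    return 4.0 * math.pi * d ** 2 * S * channel   # watts

for d_ly in (100, 1000, 70000):                   # nearby stars ... across the Galaxy
    print(f"{d_ly:>6} ly: {isotropic_power(d_ly):.2e} W")
# At galactic distances this is ~1e17-1e18 W, i.e. of the same order as the
# sunlight intercepted by an Earth-sized planet, as noted above.
```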
First-principles modeling of biological systems and structure-based drug-design.
Sgrignani, Jacopo; Magistrato, Alessandra
2013-03-01
Molecular modeling techniques play a relevant role in drug design, providing detailed information at the atomistic level on the structural, dynamical, mechanistic and electronic properties of biological systems involved in disease onset, integrating and supporting commonly used experimental approaches. This information is often not accessible to experimental techniques taken singly, but it is of crucial importance for drug design. Due to the enormous increase in computer power over the last decades, quantum mechanical (QM) or first-principles-based methods are now often used to address biological issues of pharmaceutical relevance, providing relevant information for drug design. Due to their complexity and size, biological systems are often investigated by means of a mixed quantum-classical (QM/MM) approach, which treats a limited, chemically relevant portion of the system at an accurate QM level and the remainder of the biomolecule and its environment at the molecular mechanics (MM) level. This method provides a good compromise between computational cost and accuracy, allowing one to characterize the properties of the biological system and the (free) energy landscape of the process under study with the accuracy of a QM description. In this review, after a brief introduction to QM and QM/MM methods, we discuss a few representative examples, taken from our work, of the application of these methods to the study of metallo-enzymes of pharmaceutical interest, metal-containing anticancer drugs targeting DNA, and neurodegenerative diseases. The information obtained from these studies may provide the basis for rational structure-based design of new and more efficient inhibitors or drugs.
Genome-wide detection of intervals of genetic heterogeneity associated with complex traits
Llinares-López, Felipe; Grimm, Dominik G.; Bodenham, Dean A.; Gieraths, Udo; Sugiyama, Mahito; Rowan, Beth; Borgwardt, Karsten
2015-01-01
Motivation: Genetic heterogeneity, the fact that several sequence variants give rise to the same phenotype, is a phenomenon that is of the utmost interest in the analysis of complex phenotypes. Current approaches for finding regions in the genome that exhibit genetic heterogeneity suffer from at least one of two shortcomings: (i) they require the definition of an exact interval in the genome that is to be tested for genetic heterogeneity, potentially missing intervals of high relevance, or (ii) they suffer from an enormous multiple hypothesis testing problem due to the large number of potential candidate intervals being tested, which results in either many false positives or a lack of power to detect true intervals. Results: Here, we present an approach that overcomes both problems: it allows one to automatically find all contiguous sequences of single nucleotide polymorphisms in the genome that are jointly associated with the phenotype. It also solves both the inherent computational efficiency problem and the statistical problem of multiple hypothesis testing, which are both caused by the huge number of candidate intervals. We demonstrate on Arabidopsis thaliana genome-wide association study data that our approach can discover regions that exhibit genetic heterogeneity and would be missed by single-locus mapping. Conclusions: Our novel approach can contribute to the genome-wide discovery of intervals that are involved in the genetic heterogeneity underlying complex phenotypes. Availability and implementation: The code can be obtained at: http://www.bsse.ethz.ch/mlcb/research/bioinformatics-and-computational-biology/sis.html. Contact: felipe.llinares@bsse.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072488
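For intuition only, the brute-force sketch below tests every short contiguous SNP interval under an "any variant in the interval" encoding with a plain Bonferroni correction on simulated data; the paper's actual method controls the multiple-testing burden far more efficiently (via testability-based pruning), and real genotype data would replace the simulation.

```python
# Brute-force contiguous-interval association testing on simulated genotypes.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n, m = 300, 40
X = rng.binomial(1, 0.05, size=(n, m))          # minor-allele indicators
y = rng.binomial(1, 0.3, size=n)
y |= X[:, 10:14].any(axis=1)                    # planted heterogeneous interval

tests = []
for i in range(m):
    for j in range(i + 1, min(i + 10, m) + 1):  # intervals up to length 10
        carrier = X[:, i:j].any(axis=1)         # "any variant in interval" encoding
        table = [[np.sum(carrier & (y == 1)), np.sum(carrier & (y == 0))],
                 [np.sum(~carrier & (y == 1)), np.sum(~carrier & (y == 0))]]
        tests.append(((i, j), fisher_exact(table)[1]))

alpha = 0.05 / len(tests)                       # Bonferroni over all intervals
hits = [(iv, p) for iv, p in tests if p < alpha]
print(sorted(hits, key=lambda t: t[1])[:5])     # planted interval should surface
```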
Computational modeling in cognitive science: a manifesto for change.
Addyman, Caspar; French, Robert M
2012-07-01
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and, even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes models almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models are accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals. Copyright © 2012 Cognitive Science Society, Inc.
Microtube strip heat exchanger
NASA Astrophysics Data System (ADS)
Doty, F. D.
1992-07-01
The purpose of this contract has been to explore the limits of miniaturization of heat exchangers with the goals of (1) improving the theoretical understanding of laminar heat exchangers, (2) evaluating various manufacturing difficulties, and (3) identifying major applications for the technology. A low-cost, ultra-compact heat exchanger could have an enormous impact on industry in the areas of cryocoolers and energy conversion. Compact cryocoolers based on the reverse Brayton cycle (RBC) would become practical with the availability of compact heat exchangers. Many experts believe that hardware advances in personal computer technology will rapidly slow down in four to six years unless low-cost, portable cryocoolers suitable for the desktop supercomputer can be developed. Compact refrigeration systems would permit dramatic advances in high-performance computer work stations with 'conventional' microprocessors operating at 150 K, and especially with low-cost cryocoolers below 77 K. NASA has also expressed strong interest in our MTS exchanger for space-based RBC cryocoolers for sensor cooling. We have demonstrated the feasibility of a specific conductance a factor of five higher than that of any other work on high-temperature gas-to-gas exchangers. These laminar-flow, microtube exchangers exhibit extremely low pressure drop compared to alternative compact designs under similar conditions because of their much shorter flow length and larger total flow area for lower flow velocities. The design appears to be amenable to mass production techniques, but considerable process development remains. The reduction in materials usage and the improved heat exchanger performance promise to be of enormous significance in advanced engine designs and in cryogenics.
Daily monitoring of the land surface of the Earth
NASA Astrophysics Data System (ADS)
Mascaro, J.
2016-12-01
Planet is an integrated aerospace and data analytics company that operates the largest fleet of Earth-imaging satellites. With more than 140 cube-sats successfully launched to date, Planet is now collecting approximately 10 million square kilometers of imagery per day (3-5 m per pixel, in red, green, blue and near-infrared spectral bands). By early 2017, Planet's constellation will image the entire land surface of the Earth on a daily basis. Due to investments in cloud storage and computing, approximately 75% of the imagery collected is available to Planet's partners within 24 hours of capture through an Application Program Interface. This unique dataset has enormous potential for monitoring the status of Earth's natural ecosystems, as well as human settlements and agricultural welfare. Through our Ambassadors Program, Planet has made data available for researchers in areas ranging from human rights monitoring in refugee camps, to assessments of the impact of hydroelectric installations, to tracking illegal gold mining in Amazon forests, to assessing the status of the cryosphere. Here, we share early results from Planet's research partner network, including enhanced spatial and temporal resolution of NDVI data for agricultural health in Saudi Arabia, computation of rates of illegal deforestation in Southern Peru, estimates of tropical forest carbon stocks based on data integration with active sensors, and estimates of glacial flow rates. We synthesize the potentially enormous research and scientific value of Planet's persistent monitoring capability, and discuss methods by which the data will be disseminated to the scientific community.
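Since the abstract mentions NDVI products, here is a minimal sketch of how NDVI is computed per pixel from the red and near-infrared bands; the array values are placeholders and this is not Planet's processing pipeline.

```python
# Per-pixel NDVI from near-infrared and red bands (synthetic example values).
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # eps guards against division by zero

print(ndvi(np.array([[0.5, 0.6], [0.4, 0.7]]), np.array([[0.1, 0.2], [0.3, 0.1]])))
```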
Improving the clinical impact of biomaterials in cancer immunotherapy
Gammon, Joshua M.; Dold, Neil M.; Jewell, Christopher M.
2016-01-01
Immunotherapies for cancer have progressed enormously over the past few decades, and hold great promise for the future. The successes of these therapies, with some patients showing durable and complete remission, demonstrate the power of harnessing the immune system to eradicate tumors. However, the effectiveness of current immunotherapies is limited by hurdles ranging from immunosuppressive strategies employed by tumors, to inadequate specificity of existing therapies, to heterogeneity of disease. Further, the vast majority of approved immunotherapies employ systemic delivery of immunomodulators or cells that make addressing some of these challenges more difficult. Natural and synthetic biomaterials–such as biocompatible polymers, self-assembled lipid particles, and implantable biodegradable devices–offer unique potential to address these hurdles by harnessing the benefits of therapeutic targeting, tissue engineering, co-delivery, controlled release, and sensing. However, despite the enormous investment in new materials and nanotechnology, translation of these ideas to the clinic is still an uncommon outcome. Here we review the major challenges facing immunotherapies and discuss how the newest biomaterials and nanotechnologies could help overcome these challenges to create new clinical options for patients. PMID:26871948
Vacuum low-temperature superconductivity is the essence of superconductivity - Atomic New Theory
NASA Astrophysics Data System (ADS)
Yongquan, Han
2010-10-01
According to this theory, the temperature of the universe closest in time to the Big Bang should be the temperature of the nucleus. After the Big Bang, atoms formed instantly; because there is an absolute vacuum between nuclei and electrons, the nucleus cannot emit energy (radioactive elements excepted, and in fact even these emit only limited power). The resulting temperature difference between the nucleus and its exterior is so enormous that external particles are drawn closer to the nucleus and disturb the motion of electrons; when a conductor conducts, this disturbance affects its conductivity and gives rise to resistance. If it is assumed that no particles outside the nucleus affect the motion of electrons, the potential difference formed does not change once established, which is the phenomenon now called superconductivity. By extension, conductors whose atoms or molecules are closely spaced and whose valence electron count is high should favour high-temperature superconductivity. This atomic theory is also claimed to explain why atomic and hydrogen bombs can release enormous energy after an explosion, and to explain the phenomenon of superfluidity in the natural world.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
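The two patent abstracts above describe the same mechanism: each node cuts power to some components as soon as it enters a blocking operation, and power is restored only once every node has entered it. The sketch below simulates that behaviour with threads and a barrier; the hook names and the simulation are ours and do not reflect the patented implementation.

```python
# Toy simulation: reduce power on entering a blocking op, restore once all nodes have begun it.
import threading, time, random

N_NODES = 4
barrier = threading.Barrier(N_NODES)   # models the collective/blocking operation

def reduce_power(node):                # stand-in for a real power-management hook (assumed name)
    print(f"node {node}: lowering power to idle components")

def restore_power(node):               # stand-in for a real power-management hook (assumed name)
    print(f"node {node}: restoring full power")

def compute_node(node):
    time.sleep(random.random())        # nodes reach the blocking op asynchronously
    reduce_power(node)                 # power drops as soon as *this* node begins blocking
    barrier.wait()                     # returns only after *all* nodes have begun blocking
    restore_power(node)                # so restoration happens once everyone has arrived

threads = [threading.Thread(target=compute_node, args=(i,)) for i in range(N_NODES)]
for t in threads: t.start()
for t in threads: t.join()
```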
CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds
NASA Astrophysics Data System (ADS)
Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol
The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer-animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations, particle effects and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model that favours performance, and a geometric steering model that delivers the best realism. Mixing these approaches makes it possible to simulate thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.
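As a rough illustration of the level-of-detail idea described above, the snippet below switches each agent between a cheap steering routine and a more expensive geometric one based on distance to the camera; the threshold and the steering bodies are placeholders of ours, not the paper's fuzzy or geometric models.

```python
# Placeholder CA-LOD dispatch: cheap avoidance far from the viewer, accurate avoidance up close.
import math

GEOMETRIC_RADIUS = 30.0  # metres from the camera; illustrative threshold

def fuzzy_steering(agent, neighbours):
    """Cheap heuristic: nudge heading away from the more crowded side (placeholder logic)."""
    left = sum(1 for n in neighbours if n["x"] < agent["x"])
    right = len(neighbours) - left
    return agent["heading"] + (0.1 if left > right else -0.1)

def geometric_steering(agent, neighbours):
    """More expensive: steer directly away from the nearest neighbour (placeholder logic)."""
    if not neighbours:
        return agent["heading"]
    nearest = min(neighbours, key=lambda n: (n["x"] - agent["x"])**2 + (n["y"] - agent["y"])**2)
    return math.atan2(agent["y"] - nearest["y"], agent["x"] - nearest["x"])

def steer(agent, neighbours, camera):
    dist = math.hypot(agent["x"] - camera[0], agent["y"] - camera[1])
    model = geometric_steering if dist < GEOMETRIC_RADIUS else fuzzy_steering
    return model(agent, neighbours)
```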
Li, Kenli; Zou, Shuting; Xv, Jin
2008-01-01
Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations.
Li, Kenli; Zou, Shuting; Xv, Jin
2008-01-01
Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations. PMID:18431451
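For readers unfamiliar with the problem referenced in both records above, the ECDLP over a binary field can be stated compactly as follows; this is the standard textbook statement, not anything specific to the paper's DNA algorithms.

```latex
% Elliptic curve discrete logarithm problem over GF(2^n)
\text{Given } E/\mathbb{F}_{2^n},\; P \in E(\mathbb{F}_{2^n}) \text{ of order } r,\;
Q \in \langle P \rangle,\qquad \text{find } k \in \{0,\dots,r-1\} \text{ such that } Q = kP .
```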
Asif, Muhammad; Guo, Xiangzhou; Zhang, Jing; Miao, Jungang
2018-04-17
Digital cross-correlation is central to many applications including but not limited to Digital Image Processing, Satellite Navigation and Remote Sensing. With recent advancements in digital technology, the computational demands of such applications have increased enormously. In this paper we present a high-throughput digital cross-correlator capable of processing 1-bit digitized streams at rates of up to 2 GHz simultaneously on 64 channels, i.e., approximately 4 trillion correlation and accumulation operations per second. In order to achieve higher throughput, we have focused on frequency-based partitioning of our design and tried to minimize and localize high-frequency operations. This correlator is designed for a Passive Millimeter Wave Imager intended for the detection of contraband items concealed on the human body. The goals are to increase the system bandwidth, achieve video-rate imaging, improve sensitivity and reduce the size. The design methodology is detailed in subsequent sections, elaborating the techniques that enable high throughput. The design is verified for a Xilinx Kintex UltraScale device in simulation and the implementation results are given in terms of device utilization and power consumption estimates. Our results show considerable improvements in throughput as compared to our baseline design, while the correlator successfully meets the functional requirements.
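One common way to correlate 1-bit digitized streams, and a plausible reading of the arithmetic above, is to replace multiplication by XNOR and accumulation by a popcount; the sketch below shows that arithmetic on NumPy arrays. This is offered as background on the technique, not as a description of the paper's FPGA implementation.

```python
# 1-bit correlation: multiply -> XNOR, accumulate -> count of agreements.
import numpy as np

def one_bit_correlation(a, b):
    """a, b: arrays of 0/1 samples (sign bits). Returns a correlation estimate in [-1, 1]."""
    agree = np.count_nonzero(a == b)   # XNOR followed by popcount
    n = a.size
    return (2 * agree - n) / n         # agreements minus disagreements, normalized

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=1 << 16)
noise = rng.integers(0, 2, size=x.size)
y = np.where(rng.random(x.size) < 0.9, x, noise)  # y mostly follows x
print(one_bit_correlation(x, y))
```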
openBIS: a flexible framework for managing and analyzing complex data in biology research
2011-01-01
Background Modern data generation techniques used in distributed systems biology research projects often create datasets of enormous size and diversity. We argue that in order to overcome the challenge of managing those large quantitative datasets and maximise the biological information extracted from them, a sound information system is required. Ease of integration with data analysis pipelines and other computational tools is a key requirement for it. Results We have developed openBIS, an open source software framework for constructing user-friendly, scalable and powerful information systems for data and metadata acquired in biological experiments. openBIS enables users to collect, integrate, share, publish data and to connect to data processing pipelines. This framework can be extended and has been customized for different data types acquired by a range of technologies. Conclusions openBIS is currently being used by several SystemsX.ch and EU projects applying mass spectrometric measurements of metabolites and proteins, High Content Screening, or Next Generation Sequencing technologies. The attributes that make it interesting to a large research community involved in systems biology projects include versatility, simplicity in deployment, scalability to very large data, flexibility to handle any biological data type and extensibility to the needs of any research domain. PMID:22151573
Assurance Technology Challenges of Advanced Space Systems
NASA Technical Reports Server (NTRS)
Chern, E. James
2004-01-01
The initiative to explore space and extend a human presence across our solar system to revisit the moon and Mars poses enormous technological challenges to the nation's space agency and aerospace industry. Key areas of technology development needed to enable the endeavor include advanced materials, structures and mechanisms; micro/nano sensors and detectors; power generation, storage and management; advanced thermal and cryogenic control; guidance, navigation and control; command and data handling; advanced propulsion; advanced communication; on-board processing; advanced information technology systems; modular and reconfigurable systems; precision formation flying; solar sails; distributed observing systems; space robotics; etc. Quality assurance concerns such as functional performance, structural integrity, radiation tolerance, health monitoring, diagnosis, maintenance, calibration, and initialization can affect the performance of systems and subsystems. It is thus imperative to employ innovative nondestructive evaluation methodologies to ensure quality and integrity of advanced space systems. Advancements in integrated multi-functional sensor systems, autonomous inspection approaches, distributed embedded sensors, roaming inspectors, and shape adaptive sensors are sought. Concepts in computational models for signal processing and data interpretation to establish quantitative characterization and event determination are also of interest. Prospective evaluation technologies include ultrasonics, laser ultrasonics, optics and fiber optics, shearography, video optics and metrology, thermography, electromagnetics, acoustic emission, x-ray, data management, biomimetics, and nano-scale sensing approaches for structural health monitoring.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-02-05
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D
2012-10-23
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
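Both abstracts above describe the same control loop: run applications by priority at an initial power level, and apply conservation actions once a consumption budget is reached. A schematic version of that loop is sketched below; the meter, the conservation actions and all names are assumptions of ours, not the patented method.

```python
# Schematic budget-based power control loop (all hooks and names are assumed/hypothetical).
def run_with_power_budget(apps, budget_joules, read_energy_meter, step_app, conservation_actions):
    """apps: list of (priority, app) pairs; step_app runs one slice of an app; app.done signals completion."""
    apps = sorted(apps, key=lambda pa: pa[0], reverse=True)   # highest priority first
    conserving = False
    while any(not app.done for _, app in apps):
        for _, app in apps:
            if not app.done:
                step_app(app)                                 # execute at the current power level
        if not conserving and read_energy_meter() >= budget_joules:
            for action in conservation_actions:               # e.g. lower clocks, idle network links
                action()
            conserving = True                                 # keep running, but in conservation mode
```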
Implementing renewable energy in Cuba
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawthorne, W.; Stone, L.; Perez, V.B.H.
Since the collapse of the Soviet Union and the tightening of the US embargo, Cuba has found itself in an energy crisis of enormous magnitude. Faced with this energy crisis and its ensuing black-outs and productivity reductions, Cuba has developed a national energy plan which focuses on energy self-sufficiency and sustainability. Energy efficiency, solar energy, wind power, micro-hydro, and biomass are each included in the plan. Implementation of renewable energy projects in each of these areas has begun throughout the country with the enthusiastic support of Cubasolar, the Cuban renewable energy professional association.
Enormous Disc of Cool Gas Surrounding the Nearby Powerful Radio Galaxy NGC 612 (PKS 0131-36)
2008-05-22
Galaxies in clusters appear to be much more devoid of H I gas, as suggested by a recent H I survey of the Virgo cluster by di Serego Alighieri et al. ... NGC 612. This paper is part of an ongoing study to map the large-scale neutral hydrogen properties of nearby radio galaxies and it presents the first ...
Interactive models of communication at the nanoscale using nanoparticles that talk to one another
Llopis-Lorente, Antoni; Díez, Paula; Sánchez, Alfredo; Marcos, María D.; Sancenón, Félix; Martínez-Ruiz, Paloma; Villalonga, Reynaldo; Martínez-Máñez, Ramón
2017-01-01
‘Communication' between abiotic nanoscale chemical systems is an almost-unexplored field with enormous potential. Here we show the design and preparation of a chemical communication system based on enzyme-powered Janus nanoparticles, which mimics an interactive model of communication. Cargo delivery from one nanoparticle is governed by the biunivocal communication with another nanoparticle, which involves two enzymatic processes and the interchange of chemical messengers. The conceptual idea of establishing communication between nanodevices opens the opportunity to develop complex nanoscale systems capable of sharing information and cooperating. PMID:28556828
Schlumberger, Martin; Le Guen, Bernard
2012-01-01
Following the Chernobyl accident, enormous amounts of radioisotopes were released in the atmosphere and have contaminated surrounding populations in the absence of rapid protective countermeasures. The highest radiation doses were delivered to the thyroid gland, and the only direct consequence of radiation exposure observed among contaminated population is the increased incidence of thyroid cancers among subjects who were children in 1986 and who lived at that time in Belarus, Ukraine or Russia. © 2012 médecine/sciences – Inserm / SRMS.
Earthdata Cloud Analytics Project
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Lynnes, Chris
2018-01-01
This presentation describes a nascent project in NASA to develop a framework to support end-user analytics of NASA's Earth science data in the cloud. The chief benefit of migrating EOSDIS (Earth Observation System Data and Information Systems) data to the cloud is to position the data next to enormous computing capacity to allow end users to process data at scale. The Earthdata Cloud Analytics project will use a service-based approach to facilitate the infusion of evolving analytics technology and the integration with non-NASA analytics or other complementary functionality at other agencies and in other nations.
NASA Technical Reports Server (NTRS)
1972-01-01
The growth of common as well as emerging visual display technologies are surveyed. The major inference is that contemporary society is rapidly growing evermore reliant on visual display for a variety of purposes. Because of its unique mission requirements, the National Aeronautics and Space Administration has contributed in an important and specific way to the growth of visual display technology. These contributions are characterized by the use of computer-driven visual displays to provide an enormous amount of information concisely, rapidly and accurately.
ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.
Kim, Seongho
2015-11-01
Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, to the higher-order coefficient. This is because the higher-order semi-partial correlation calculation using a recursive formula requires an enormous number of recursive calculations to obtain the correlation coefficients. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented on an R package ppcor along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the level of the statistical significance with its test statistic.
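To make the quantity concrete: the semi-partial (part) correlation of x with y, controlling a set of covariates Z out of y only, is the ordinary correlation between x and the residual of y regressed on Z. The sketch below computes it that way with NumPy; it follows the definition rather than reproducing ppcor's closed-form matrix formula.

```python
# Semi-partial correlation by the residual definition (not ppcor's matrix formula).
import numpy as np

def semi_partial_corr(x, y, Z):
    """corr(x, residual of y after regressing y on covariates Z). Z: (n, k) array."""
    Z1 = np.column_stack([np.ones(len(y)), Z])     # add intercept
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    resid_y = y - Z1 @ beta                        # the part of y not explained by Z
    return np.corrcoef(x, resid_y)[0, 1]

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
y = Z @ np.array([1.0, -0.5]) + rng.normal(size=500)
x = 0.8 * y + rng.normal(size=500)
print(semi_partial_corr(x, y, Z))
```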
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; Ramprasad, R.; Gubernatis, J. E.; Lookman, T.
2016-01-01
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance. PMID:26783247
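As a generic illustration of the informatics-based approach described above (feature-based regression of bandgaps), the sketch below fits a gradient-boosted regressor on synthetic feature vectors; the feature names echo the descriptors highlighted in the abstract, but the data are random placeholders, not the authors' dataset or model.

```python
# Generic feature-based bandgap regression on synthetic placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(-8, -2, n),    # "lowest occupied Kohn-Sham level, site A" (fake values)
    rng.uniform(-8, -2, n),    # "lowest occupied Kohn-Sham level, site B" (fake values)
    rng.uniform(0.8, 4.0, n),  # "electronegativity, site A" (fake values)
    rng.uniform(0.8, 4.0, n),  # "electronegativity, site B" (fake values)
])
y = 0.4 * (X[:, 2] + X[:, 3]) - 0.2 * (X[:, 0] + X[:, 1]) + rng.normal(0, 0.1, n)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```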
A Haptic-Enhanced System for Molecular Sensing
NASA Astrophysics Data System (ADS)
Comai, Sara; Mazza, Davide
The science of haptics has received enormous attention in the last decade. One of the major application trends of haptics technology is data visualization and training. In this paper, we present a haptically-enhanced system for the manipulation and tactile exploration of molecules. The geometrical models of molecules are extracted either from theoretical or empirical data, using file formats widely adopted in the chemical and biological fields. The addition of information computed with computational chemistry tools allows users to feel the interaction forces between an explored molecule and a charge associated with the haptic device, and to visualize a huge amount of numerical data in a more comprehensible way. The developed tool can be used either for teaching or for research purposes due to its high reliance on both theoretical and experimental data.
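The force rendering described above can be pictured as summing pairwise electrostatic contributions between the probe charge and the molecule's partial charges; a bare-bones Coulomb sum is sketched below. The SI units, example charges and the absence of van der Waals terms are simplifying assumptions of ours, not the system's actual force model.

```python
# Bare-bones Coulomb force on a haptic probe charge from a set of atomic partial charges.
import numpy as np

K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def probe_force(probe_pos, probe_q, atom_pos, atom_q):
    """Sum of pairwise Coulomb forces acting on the probe (SI units assumed)."""
    r_vec = probe_pos - atom_pos                       # (n_atoms, 3) separation vectors
    r = np.linalg.norm(r_vec, axis=1, keepdims=True)
    return np.sum(K * probe_q * atom_q[:, None] * r_vec / r**3, axis=0)

atoms = np.array([[0.0, 0.0, 0.0], [1e-10, 0.0, 0.0]])   # two atoms, 1 angstrom apart
charges = np.array([0.4e-19, -0.4e-19])                  # illustrative partial charges
print(probe_force(np.array([0.0, 2e-10, 0.0]), 1.6e-19, atoms, charges))
```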
Cryptography and the Internet: lessons and challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCurley, K.S.
1996-12-31
The popularization of the Internet has brought fundamental changes to the world, because it allows a universal method of communication between computers. This carries enormous benefits with it, but also raises many security considerations. Cryptography is a fundamental technology used to provide security of computer networks, and there is currently a widespread engineering effort to incorporate cryptography into various aspects of the Internet. The system-level engineering required to provide security services for the Internet carries some important lessons for researchers whose study is focused on narrowly defined problems. It also offers challenges to the cryptographic research community by raising new questions not adequately addressed by the existing body of knowledge. This paper attempts to summarize some of these lessons and challenges for the cryptographic research community.
Chandra Data Reveal Rapidly Whirling Black Holes
NASA Astrophysics Data System (ADS)
2008-01-01
A new study using results from NASA's Chandra X-ray Observatory provides one of the best pieces of evidence yet that many supermassive black holes are spinning extremely rapidly. The whirling of these giant black holes drives powerful jets that pump huge amounts of energy into their environment and affects galaxy growth. A team of scientists compared leading theories of jets produced by rotating supermassive black holes with Chandra data. A sampling of nine giant galaxies that exhibit large disturbances in their gaseous atmospheres showed that the central black holes in these galaxies must be spinning at near their maximum rates. "We think these monster black holes are spinning close to the limit set by Einstein's theory of relativity, which means that they can drag material around them at close to the speed of light," said Rodrigo Nemmen, a visiting graduate student at Penn State University, and lead author of a paper on the new results presented at the American Astronomical Society meeting in Austin, Texas. The research reinforces other, less direct methods previously used which have indicated that some stellar and supermassive black holes are spinning rapidly. According to Einstein's theory, a rapidly spinning black hole makes space itself rotate. This effect, coupled with gas spiraling toward the black hole, can produce a rotating, tightly wound vertical tower of magnetic field that flings a large fraction of the inflowing gas away from the vicinity of the black hole in an energetic, high-speed jet. Computer simulations by other authors have suggested that black holes may acquire their rapid spins when galaxies merge, and through the accretion of gas from their surroundings. "Extremely fast spin might be very common for large black holes," said co-investigator Richard Bower of Durham University. "This might help us explain the source of these incredible jets that we see stretching for enormous distances across space." One significant consequence of powerful black-hole jets in galaxies at the centers of galaxy clusters is that they can pump enormous amounts of energy into their environments and heat the gas around them. This heating prevents the gas from cooling, and affects the rate at which new stars form, thereby limiting the size of the central galaxy. Understanding the details of this fundamental feedback loop between supermassive black holes and the formation of the most massive galaxies remains an important goal in astrophysics. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass.
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MSDOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
Aguilar, Jeffrey; Zhang, Tingnan; Qian, Feifei; Kingsbury, Mark; McInroe, Benjamin; Mazouchova, Nicole; Li, Chen; Maladen, Ryan; Gong, Chaohui; Travers, Matt; Hatton, Ross L; Choset, Howie; Umbanhowar, Paul B; Goldman, Daniel I
2016-11-01
Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems-a 'locomotion robophysics'-which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.
NASA Astrophysics Data System (ADS)
Aguilar, Jeffrey; Zhang, Tingnan; Qian, Feifei; Kingsbury, Mark; McInroe, Benjamin; Mazouchova, Nicole; Li, Chen; Maladen, Ryan; Gong, Chaohui; Travers, Matt; Hatton, Ross L.; Choset, Howie; Umbanhowar, Paul B.; Goldman, Daniel I.
2016-11-01
Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems—a ‘locomotion robophysics’—which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others’ research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
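To ground the idea, the snippet below computes statistical power for a balanced two-level cluster-randomized design using the noncentral t distribution (J clusters of size n, intraclass correlation rho, standardized effect delta). This is the standard design-effect expression from the multilevel-power literature, offered as an illustration rather than a reproduction of the tables discussed in the article.

```python
# Power for a balanced two-level cluster-randomized design (standard design-effect formula).
from scipy.stats import nct, t

def power_two_level(delta, J, n, rho, alpha=0.05):
    """delta: standardized effect; J: total clusters (J/2 per arm); n: cluster size; rho: ICC."""
    lam = delta * ((J * n / 4.0) / (1.0 + (n - 1.0) * rho)) ** 0.5  # noncentrality parameter
    df = J - 2
    t_crit = t.ppf(1 - alpha / 2, df)
    return 1 - nct.cdf(t_crit, df, lam) + nct.cdf(-t_crit, df, lam)  # two-sided power

print(power_two_level(delta=0.25, J=40, n=20, rho=0.10))
```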
Integrated design, execution, and analysis of arrayed and pooled CRISPR genome-editing experiments.
Canver, Matthew C; Haeussler, Maximilian; Bauer, Daniel E; Orkin, Stuart H; Sanjana, Neville E; Shalem, Ophir; Yuan, Guo-Cheng; Zhang, Feng; Concordet, Jean-Paul; Pinello, Luca
2018-05-01
CRISPR (clustered regularly interspaced short palindromic repeats) genome-editing experiments offer enormous potential for the evaluation of genomic loci using arrayed single guide RNAs (sgRNAs) or pooled sgRNA libraries. Numerous computational tools are available to help design sgRNAs with optimal on-target efficiency and minimal off-target potential. In addition, computational tools have been developed to analyze deep-sequencing data resulting from genome-editing experiments. However, these tools are typically developed in isolation and oftentimes are not readily translatable into laboratory-based experiments. Here, we present a protocol that describes in detail both the computational and benchtop implementation of an arrayed and/or pooled CRISPR genome-editing experiment. This protocol provides instructions for sgRNA design with CRISPOR (computational tool for the design, evaluation, and cloning of sgRNA sequences), experimental implementation, and analysis of the resulting high-throughput sequencing data with CRISPResso (computational tool for analysis of genome-editing outcomes from deep-sequencing data). This protocol allows for design and execution of arrayed and pooled CRISPR experiments in 4-5 weeks by non-experts, as well as computational data analysis that can be performed in 1-2 d by both computational and noncomputational biologists alike using web-based and/or command-line versions.
EDITORIAL: XXVI IUPAP Conference on Computational Physics (CCP2014)
NASA Astrophysics Data System (ADS)
Sandvik, A. W.; Campbell, D. K.; Coker, D. F.; Tang, Y.
2015-09-01
The 26th IUPAP Conference on Computational Physics, CCP2014, was held in Boston, Massachusetts, during August 11-14, 2014. Almost 400 participants from 38 countries convened at the George Sherman Union at Boston University for four days of plenary and parallel sessions spanning a broad range of topics in computational physics and related areas. The first meeting in the series that developed into the annual Conference on Computational Physics (CCP) was held in 1989, also on the campus of Boston University and chaired by our colleague Claudio Rebbi. The express purpose of that meeting was to discuss the progress, opportunities and challenges of common interest to physicists engaged in computational research. The conference having returned to the site of its inception, it is interesting to reflect on the development of the field during the intervening years. Though 25 years is a short time for mankind, computational physics has taken giant leaps during these years, not only because of the enormous increases in computer power but especially because of the development of new methods and algorithms, and the growing awareness of the opportunities the new technologies and methods can offer. Computational physics now represents a "third leg" of research alongside analytical theory and experiments in almost all subfields of physics, and because of this there is also increasing specialization within the community of computational physicists. It is therefore a challenge to organize a meeting such as CCP, which must have sufficient depth in different areas to hold the interest of experts while at the same time being broad and accessible. Still, at a time when computational research continues to gain in importance, the CCP series is critical in the way it fosters cross-fertilization among fields, with many participants specifically attending in order to get exposure to new methods in fields outside their own. As organizers and editors of these Proceedings, we are very pleased with the high quality of the papers provided by the participants. These articles represent a good cross-section of what was presented at the meeting, and it is our hope that they will not only be useful individually for their specific scientific content but will also serve collectively as a historical snapshot of the state of computational physics at the time. The remainder of this Preface contains lists detailing the organizational structure of CCP2014, endorsers and sponsors of the meeting, plenary and invited talks, and a presentation of the 2014 IUPAP C20 Young Scientist Prize. We would like to take the opportunity to again thank all those who contributed to the success of CCP2014, as organizers, sponsors, presenters, exhibitors, and participants. Anders Sandvik, David Campbell, David Coker, Ying Tang
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to run the application and speed up the computing process.
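As a toy version of the compute-intensive Monte Carlo workloads mentioned above, the sketch below splits a pi estimate across worker processes; a grid deployment would distribute the same independent chunks across nodes rather than local processes.

```python
# Embarrassingly parallel Monte Carlo (pi estimate) split across worker processes.
import random
from multiprocessing import Pool

def count_hits(n_samples):
    rng = random.Random()
    return sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)   # point falls inside the quarter circle

if __name__ == "__main__":
    n_workers, samples_per_worker = 4, 1_000_000
    with Pool(n_workers) as pool:
        hits = sum(pool.map(count_hits, [samples_per_worker] * n_workers))
    print("pi ~", 4.0 * hits / (n_workers * samples_per_worker))
```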
Altschuler, E.L.; Dowla, F.U.
1998-11-24
The encephalolexianalyzer uses digital signal processing techniques on electroencephalograph (EEG) brain waves to determine whether or not someone is thinking about moving, e.g., tapping their fingers, or, alternatively, whether someone is actually moving, e.g., tapping their fingers, or at rest, i.e., not moving and not thinking of moving. The mu waves measured by a pair of electrodes placed over the motor cortex are signal processed to determine the power spectrum. At rest, the peak value of the power spectrum in the 8-13 Hz range is high, while when moving or thinking of moving, the peak value of the power spectrum in the 8-13 Hz range is low. This measured change in signal power spectrum is used to produce a control signal. The encephalolexianalyzer can be used to communicate either directly using Morse code, or via a cursor controlling a remote control; the encephalolexianalyzer can also be used to control other devices. The encephalolexianalyzer will be of great benefit to people with various handicaps and disabilities, and also has enormous commercial potential, as well as being an invaluable tool for studying the brain. 14 figs.
Altschuler, Eric L.; Dowla, Farid U.
1998-01-01
The encephalolexianalyzer uses digital signal processing techniques on electroencephalograph (EEG) brain waves to determine whether or not someone is thinking about moving, e.g., tapping their fingers, or, alternatively, whether someone is actually moving, e.g., tapping their fingers, or at rest, i.e., not moving and not thinking of moving. The mu waves measured by a pair of electrodes placed over the motor cortex are signal processed to determine the power spectrum. At rest, the peak value of the power spectrum in the 8-13 Hz range is high, while when moving or thinking of moving, the peak value of the power spectrum in the 8-13 Hz range is low. This measured change in signal power spectrum is used to produce a control signal. The encephalolexianalyzer can be used to communicate either directly using Morse code, or via a cursor controlling a remote control; the encephalolexianalyzer can also be used to control other devices. The encephalolexianalyzer will be of great benefit to people with various handicaps and disabilities, and also has enormous commercial potential, as well as being an invaluable tool for studying the brain.
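The decision rule described in both records is essentially a band-power threshold: estimate the power spectrum of the motor-cortex channel, take the peak in the 8-13 Hz mu band, and emit a control signal when that peak drops. A sketch of that rule using SciPy's Welch estimator follows; the sampling rate, window length and threshold are placeholders of ours, not values taken from the patent.

```python
# Mu-band (8-13 Hz) peak-power threshold, following the decision rule described above.
import numpy as np
from scipy.signal import welch

def mu_peak_power(eeg_segment, fs=256.0):
    f, pxx = welch(eeg_segment, fs=fs, nperseg=min(len(eeg_segment), 512))
    band = (f >= 8.0) & (f <= 13.0)
    return pxx[band].max()

def control_signal(eeg_segment, threshold, fs=256.0):
    """True ("moving" or "imagining movement") when the mu peak falls below threshold; False at rest."""
    return mu_peak_power(eeg_segment, fs) < threshold

rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / 256.0)
rest = np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.normal(size=t.size)     # strong 10 Hz mu rhythm
moving = 0.2 * np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.normal(size=t.size)
thr = 0.5 * mu_peak_power(rest)
print(control_signal(rest, thr), control_signal(moving, thr))           # expect: False True
```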
Interactive visualization of Earth and Space Science computations
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise
1994-01-01
Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.
Mesoscale Models of Fluid Dynamics
NASA Astrophysics Data System (ADS)
Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.
During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation, or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.
Exploiting analytics techniques in CMS computing monitoring
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.
2017-10-01
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but they are of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive modelling of CMS operations that would allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced by the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
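A minimal, non-Hadoop illustration of the kind of aggregation mentioned above (e.g. counting dataset replicas across sites from monitoring records) is sketched below with plain map and reduce functions; the record fields and names are assumptions, and the production jobs run as MapReduce applications on the CERN Hadoop cluster rather than in-process like this.

```python
# In-process map/reduce sketch of a replica-count aggregation (field names are assumed).
from collections import Counter
from functools import reduce

records = [                      # stand-ins for monitoring records streamed to Hadoop
    {"dataset": "/A/RAW", "site": "T1_US"}, {"dataset": "/A/RAW", "site": "T2_DE"},
    {"dataset": "/B/AOD", "site": "T2_DE"}, {"dataset": "/A/RAW", "site": "T2_IT"},
]

def map_phase(rec):
    return Counter({rec["dataset"]: 1})   # one replica observed for this dataset

def reduce_phase(acc, partial):
    acc.update(partial)                   # merge partial counts
    return acc

replica_counts = reduce(reduce_phase, map(map_phase, records), Counter())
print(replica_counts)                     # e.g. Counter({'/A/RAW': 3, '/B/AOD': 1})
```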
Unidata Cyberinfrastructure in the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Young, J. W.
2016-12-01
Data services, software, and user support are critical components of geosciences cyber-infrastructure that help researchers advance science. With the maturity of and significant advances in cloud computing, it has recently emerged as an alternative paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward:
* Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers);
* Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment, for consumption by anyone, on any device, from anywhere, at any time;
* Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker containers for its applications ("containerized applications"), making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools;
* Leveraging Jupyter as a central platform and hub, with its powerful set of interlinking tools, to connect data servers, Python scientific libraries, scripts, and workflows interactively;
* Exploring end-to-end modeling and prediction capabilities in the cloud;
* Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
Introducing the Benson Prize for Discovery Methods of Near Earth Objects by Amateurs
NASA Astrophysics Data System (ADS)
Benson, J. W.
1997-05-01
The Benson Prize, sponsored by the Space Development Corporation. The Benson Prize for Discovery Methods of Near Earth Objects by Amateurs is an annual competition which awards prizes to the best proposed methods by which amateur astronomers may discover near earth objects such as asteroids and comet cores. The purpose of the Benson Prize is to encourage the discovery of near earth objects by amateur astronomers. The utilization of valuable near earth resources can provide many new jobs and economic activities on earth, while also creating many new opportunities for opening up the space frontier. The utilization of near earth resources will significantly contribute to the lessening of environmental degradation on the Earth caused by mining and chemical leaching operations required to exploit the low grade ores now remaining on Earth. In addition, near earth objects pose grave dangers for life on earth. Discovering and plotting the orbits of all potentially dangerous near earth objects is the first and necessary step in protecting ourselves against the enormous potential damage possible from near earth objects. With the high quality, large size and low cost of today's consumer telescopes, the rapid development of powerful, high-resolution and inexpensive CCD cameras, and the proliferation of inexpensive software for today's powerful home computers, the discovery of near earth objects by amateur astronomers is more attainable than ever. The Benson Prize is sponsored by the Space Development Corporation, a space resource exploration and utilization company. In 1997 one prize of $500 will be awarded to the best proposed method for the amateur discovery of NEOs, and in each of the four following years, prizes of $500, $250 and $100 will be awarded. Prizes for the actual discovery of Near Earth Asteroids will be added in later years.
Physical Analytics: An emerging field with real-world applications and impact
NASA Astrophysics Data System (ADS)
Hamann, Hendrik
2015-03-01
In the past, most information on the internet has originated from humans or computers. However, with the emergence of cyber-physical systems, vast amounts of data are now being created by sensors on devices, machines and the like, digitizing the physical world. While cyber-physical systems are the subject of active research around the world, the vast amount of actual data generated from the physical world has so far attracted little attention from the engineering and physics community. In this presentation we use examples to highlight the opportunities in this new subject of "Physical Analytics" for highly interdisciplinary research (including physics, engineering and computer science), which aims at understanding real-world physical systems by leveraging cyber-physical technologies. More specifically, the convergence of the physical world with the digital domain allows physical principles to be applied to everyday problems in a much more effective and informed way than was possible in the past. Much as traditional applied physics and engineering have made enormous advances and changed our lives by making detailed measurements to understand the physics of an engineered device, we can now apply the same rigor and principles to understand large-scale physical systems. In the talk we first present a set of "configurable" enabling technologies for Physical Analytics, including ultralow-power sensing and communication technologies, physical big data management technologies, numerical modeling for physical systems, machine-learning-based physical model blending, and physical-analytics-based automation and control. Then we discuss in detail several concrete applications of Physical Analytics, ranging from energy management in buildings and data centers, through environmental sensing and controls and precision agriculture, to renewable energy forecasting and management.
NASA Technical Reports Server (NTRS)
Prufert-Bebout, Lee; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
Microbial life on Earth is enormously abundant at sediment-water interfaces. The fossil record in fact contains abundant evidence of the preservation of life on such surfaces. It is therefore critical to our interpretation of early Earth history, and potentially to the history of life on other planets, to be able to recognize life forms at these interfaces. On Earth this life often occurs as organized structures of microbes and their extracellular exudates known as biofilms. When such biofilms occur in areas receiving sunlight, photosynthetic biofilms are the dominant form in natural ecosystems due to the selective advantage inherent in their ability to utilize solar energy. Cyanobacteria are the dominant phototrophic microbes in most modern and ancient photosynthetic biofilms, microbial mats and stromatolites. Due to their long (3.5 billion year) evolutionary history, this group has extensively diversified, resulting in an enormous array of morphologies and physiological abilities. This enormous diversity and specialization results in very specific selection for a particular cyanobacterium in each available photosynthetic niche. Furthermore, these organisms can alter their spatial orientation, cell morphology, pigmentation and associations with heterotrophic organisms in order to fine-tune their optimization to a given micro-niche. These adaptations can be detected, and if adequate knowledge of the interaction between environmental conditions and organism response is available, the detectable organism response can be used to infer the environmental conditions causing that response. This presentation will detail two specific examples which illustrate this point. Light and water are essential to photosynthesis in cyanobacteria, and these organisms have specific, detectable behavioural responses to these parameters. We will present cyanobacterial responses to quantified flow and irradiance to demonstrate the interpretative power of distribution and orientation information. This study presents new results, but many such examples are already found in the literature.
Impact of Interstellar Vehicle Acceleration and Cruise Velocity on Total Mission Mass and Trip Time
NASA Technical Reports Server (NTRS)
Frisbee, Robert H.
2006-01-01
Far-term interstellar missions, like their near-term solar system exploration counterparts, seek to minimize overall mission trip time and transportation system mass. Trip time is especially important in interstellar missions because of the enormous distances between stars and the finite limit of the speed of light (c). In this paper, we investigate the impact of vehicle acceleration and maximum or cruise velocity (Vcruise) on the total mission trip time. We also consider the impact that acceleration has on the transportation system mass (M) and power (P) (e.g., acceleration ∝ power/mass and mass ∝ power), as well as the impact that the cruise velocity has on the vehicle mass (e.g., the total mission change in velocity (ΔV) ∝ Vcruise). For example, a Matter-Antimatter Annihilation Rocket's wet mass (Mwet) with propellant (Mp) will be a function of the dry mass of the vehicle (Mdry) and ΔV through the Rocket Equation. Similarly, a laser-driven LightSail's sail mass and laser power and mass will be a function of acceleration, Vcruise, and power-beaming distance (because of the need to focus the laser beam over interstellar distances).
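For orientation, the Rocket Equation invoked above ties wet mass to dry mass and mission ΔV. A standard statement of the classical (Tsiolkovsky) form and of its relativistic counterpart, which matters at the cruise velocities considered for interstellar flight, is (exhaust velocity v_e; the parameterization used in the paper itself may differ):
\[
  M_{\mathrm{wet}} = M_{\mathrm{dry}} + M_p = M_{\mathrm{dry}}\,\exp\!\left(\frac{\Delta V}{v_e}\right),
  \qquad
  \Delta V_{\mathrm{rel}} = c\,\tanh\!\left(\frac{v_e}{c}\,\ln\frac{M_{\mathrm{wet}}}{M_{\mathrm{dry}}}\right).
\]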
Spinello, Angelo; Magistrato, Alessandra
2017-08-01
Metallo-drugs have attracted enormous interest for cancer treatment. The achievements of this drug-type are summarized by the success story of cisplatin. That being said, there have been many drawbacks with its clinical use, which prompted decades' worth of research efforts to move towards safer and more effective agents, either containing platinum or different metals. Areas covered: In this review, the authors provide an atomistic picture of the molecular mechanisms involving selected metallo-drugs from structural and molecular simulation studies. They also provide an omics perspective, pointing out many unsettled aspects of the most relevant families of metallo-drugs at an epigenetic level. Expert opinion: Molecular simulations are able to provide detailed information at atomistic and temporal (ps) resolutions that are rarely accessible to experiments. The increasing accuracy of computational methods and the growing performance of computational platforms allow us to mirror wet lab experiments in silico. Consequently, the molecular mechanisms of drug action/failure can be directly viewed on a computer screen, like a 'computational microscope', allowing us to harness this knowledge for the design of the next generation of metallo-drugs.
The MOS silicon gate technology and the first microprocessors
NASA Astrophysics Data System (ADS)
Faggin, F.
2015-12-01
Today we are so used to the enormous capabilities of microelectronics that it is hard to imagine what it might have been like in the early Sixties and Seventies when much of the technology we use today was being developed. This paper will first present a brief history of microelectronics and computers, taking us to the threshold of the inventions of the MOS silicon gate technology and the microprocessor. These two creations provided the basic technology that would allow, only a few years later, the merging of microelectronics and computers into the first commercial monolithic computer. By the late Seventies, the first monolithic computer, weighing less than one gram, occupying a volume of less than one cubic centimeter, dissipating less than one Watt, and selling for less than ten dollars, could perform more information processing than the UNIVAC I, the first commercial electronic computer introduced in 1951, made with 5200 vacuum tubes, dissipating 125 kW, weighing 13 metric tons, occupying a room larger than 35 m2, and selling for more than one million dollars per unit. The first-person story of the SGT and the early microprocessors will be told by the Italian-born physicist who led both projects.
Sb7Te3/Ge multilayer films for low power and high speed phase-change memory
NASA Astrophysics Data System (ADS)
Chen, Shiyu; Wu, Weihua; Zhai, Jiwei; Song, Sannian; Song, Zhitang
2017-06-01
Phase-change memory has attracted enormous attention for its excellent properties compared to flash memories, namely high speed, high density, better data retention and low power consumption. Here we present Sb7Te3/Ge multilayer films prepared by a magnetron sputtering method. The 10-year data retention temperature is significantly increased compared with pure Sb7Te3. When the annealing temperature is above 250 °C, the Sb7Te3/Ge multilayer thin films have better interface properties, which yields a faster crystallization speed and higher thermal stability. The decrease in density of the Sb7Te3/Ge (ST/Ge) multilayer films is only around 5%, which is very suitable for phase-change materials. Moreover, the low RESET power benefits from the high resistivity and better thermal stability in the PCM cells. This work demonstrates that multilayer-configuration thin films with tailored properties are beneficial for improving the stability and speed in phase-change memory applications.
Ku, Nai-Jen; Liu, Guocheng; Wang, Chao-Hung; Gupta, Kapil; Liao, Wei-Shun; Ban, Dayan; Liu, Chuan-Pu
2017-09-28
Piezoelectric nanogenerators have been investigated to generate electricity from environmental vibrations due to their energy conversion capabilities. In this study, we demonstrate an optimal geometrical design of inertial vibration direct-current piezoelectric nanogenerators based on obliquely aligned InN nanowire (NW) arrays with an optimized oblique angle of ∼58°, driven by the inertial force of their own weight, using a mechanical shaker without any AC/DC converters. The nanogenerator device manifests potential applications not only as a unique energy harvesting device capable of scavenging energy from weak mechanical vibrations, but also as a sensitive strain sensor. The maximum output power density of the nanogenerator is estimated to be 2.9 nW cm⁻², an improvement of about 3-12 times over vertically aligned ZnO NW DC nanogenerators. Integration of two nanogenerators also exhibits a linear increase in the output power, offering enormous potential for the creation of self-powered sustainable nanosystems utilizing incessant natural ambient energy sources.
Broad area quantum cascade lasers operating in pulsed mode above 100 °C at λ ∼4.7 μm
NASA Astrophysics Data System (ADS)
Zhao, Yue; Yan, Fangliang; Zhang, Jinchuan; Liu, Fengqi; Zhuo, Ning; Liu, Junqi; Wang, Lijun; Wang, Zhanguo
2017-07-01
We demonstrate a broad area (400 μm) high power quantum cascade laser (QCL). A total peak power of 62 W at room temperature is achieved at λ ∼4.7 μm. The temperature dependence of the peak power is characterized experimentally, and the temperature of the active zone is simulated with the finite-element method (FEM). We find that the interface roughness of the active core has a great effect on the temperature of the active zone and can be enormously improved using the solid source molecular beam epitaxy (MBE) growth system. Project supported by the National Basic Research Program of China (No. 2013CB632801), the National Key Research and Development Program (No. 2016YFB0402303), the National Natural Science Foundation of China (Nos. 61435014, 61627822, 61574136, 61306058, 61404131), the Key Projects of Chinese Academy of Sciences (No. ZDRW-XH-20164), and the Beijing Natural Science Foundation (No. 4162060).
Production Management System for AMS Computing Centres
NASA Astrophysics Data System (ADS)
Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.
2017-10-01
The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production), as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model and is implemented in the scripting languages Python and Perl, together with the built-in sqlite3 database, on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
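The abstract only names the Deterministic Finite Automaton model; as an illustration of that design pattern (not the actual AMS code — the state names, events and schema below are assumptions), a production job can be driven through its lifecycle by a small transition table persisted in sqlite3:

    import sqlite3

    # Hypothetical job states and events; the real AMS system defines its own.
    TRANSITIONS = {
        ("ACQUIRED", "submit"): "SUBMITTED",
        ("SUBMITTED", "start"): "RUNNING",
        ("RUNNING", "finish"): "TRANSFERRING",
        ("TRANSFERRING", "done"): "ACCOUNTED",
        ("RUNNING", "fail"): "ACQUIRED",      # failed jobs are re-acquired
    }

    class ProductionJob:
        """Minimal DFA driving one production job; persists its state in sqlite3."""

        def __init__(self, db, job_id):
            self.db, self.job_id = db, job_id
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, state TEXT)")
            self.db.execute(
                "INSERT OR IGNORE INTO jobs VALUES (?, 'ACQUIRED')", (job_id,))

        @property
        def state(self):
            return self.db.execute(
                "SELECT state FROM jobs WHERE id = ?", (self.job_id,)).fetchone()[0]

        def handle(self, event):
            new_state = TRANSITIONS.get((self.state, event))
            if new_state is None:
                raise ValueError(f"event {event!r} not allowed in state {self.state}")
            self.db.execute("UPDATE jobs SET state = ? WHERE id = ?",
                            (new_state, self.job_id))
            return new_state

    if __name__ == "__main__":
        db = sqlite3.connect(":memory:")
        job = ProductionJob(db, "run-000123")
        for event in ("submit", "start", "finish", "done"):
            print(event, "->", job.handle(event))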
Computational clustering for viral reference proteomes
Chen, Chuming; Huang, Hongzhan; Mazumder, Raja; Natale, Darren A.; McGarvey, Peter B.; Zhang, Jian; Polson, Shawn W.; Wang, Yuqi; Wu, Cathy H.
2016-01-01
Motivation: The enormous number of redundant sequenced genomes has hindered efforts to analyze and functionally annotate proteins. As the taxonomy of viruses is not uniformly defined, viral proteomes pose special challenges in this regard. Grouping viruses based on the similarity of their proteins at proteome scale can normalize against potential taxonomic nomenclature anomalies. Results: We present Viral Reference Proteomes (Viral RPs), which are computed from complete virus proteomes within UniProtKB. Viral RPs based on 95, 75, 55, 35 and 15% co-membership in proteome similarity based clusters are provided. Comparison of our computational Viral RPs with UniProt’s curator-selected Reference Proteomes indicates that the two sets are consistent and complementary. Furthermore, each Viral RP represents a cluster of virus proteomes that was consistent with virus or host taxonomy. We provide BLASTP search and FTP download of Viral RP protein sequences, and a browser to facilitate the visualization of Viral RPs. Availability and implementation: http://proteininformationresource.org/rps/viruses/ Contact: chenc@udel.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153712
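The grouping by co-membership in proteome-similarity clusters described above can be illustrated, in very reduced form, by single-linkage clustering at a similarity cutoff (a generic sketch, not the UniProt Viral RP pipeline; the names and similarity values below are invented):

    # Minimal single-linkage sketch: proteomes whose pairwise similarity exceeds a
    # cutoff end up in the same cluster (connected components via union-find).
    def cluster(similarity, cutoff):
        parent = {k: k for pair in similarity for k in pair}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]    # path halving
                x = parent[x]
            return x

        for (a, b), s in similarity.items():
            if s >= cutoff:
                parent[find(a)] = find(b)

        groups = {}
        for k in parent:
            groups.setdefault(find(k), set()).add(k)
        return list(groups.values())

    sim = {("virusA", "virusB"): 0.97, ("virusB", "virusC"): 0.60,
           ("virusC", "virusD"): 0.20}
    print(cluster(sim, cutoff=0.95))             # virusA/virusB grouped; C and D alone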
Anyonic braiding in optical lattices
Zhang, Chuanwei; Scarola, V. W.; Tewari, Sumanta; Das Sarma, S.
2007-01-01
Topological quantum states of matter, both Abelian and non-Abelian, are characterized by excitations whose wavefunctions undergo nontrivial statistical transformations as one excitation is moved (braided) around another. Topological quantum computation proposes to use the topological protection and the braiding statistics of a non-Abelian topological state to perform quantum computation. The enormous technological prospect of topological quantum computation provides new motivation for experimentally observing a topological state. Here, we explicitly work out a realistic experimental scheme to create and braid the Abelian topological excitations in the Kitaev model built on a tunable robust system, a cold atom optical lattice. We also demonstrate how to detect the key feature of these excitations: their braiding statistics. Observation of this statistics would directly establish the existence of anyons, quantum particles that are neither fermions nor bosons. In addition to establishing topological matter, the experimental scheme we develop here can also be adapted to a non-Abelian topological state, supported by the same Kitaev model but in a different parameter regime, to eventually build topologically protected quantum gates. PMID:18000038
DNA methylation data analysis and its application to cancer research
Ma, Xiaotu; Wang, Yi-Wei; Zhang, Michael Q; Gazdar, Adi F
2013-01-01
With the rapid development of genome-wide high-throughput technologies, including expression arrays, SNP arrays and next-generation sequencing platforms, enormous amounts of molecular data have been generated and deposited in the public domain. The application of computational approaches is required to yield biological insights from this enormous, ever-growing resource. A particularly interesting subset of these resources is related to epigenetic regulation, with DNA methylation being the most abundant data type. In this paper, we will focus on the analysis of DNA methylation data and its application to cancer studies. We first briefly review the molecular techniques that generate such data, much of which has been obtained with the use of the most recent version of Infinium HumanMethylation450 BeadChip® technology (Illumina, CA, USA). We describe the coverage of the methylome by this technique. Several examples of data mining are provided. However, it should be understood that reliance on a single aspect of epigenetics has its limitations. In the not too distant future, these defects may be rectified, providing scientists with previously unavailable opportunities to explore in detail the role of epigenetics in cancer and other disease states. PMID:23750645
Data Processing Center of Radioastron Project: 3 years of operation.
NASA Astrophysics Data System (ADS)
Shatskaya, Marina
The ASC Data Processing Center (DPC) of the Radioastron Project is a fail-safe, centralized complex of interconnected software and hardware components along with organizational procedures. The tasks facing the scientific data processing center are the organization of service information exchange, the collection of scientific data, the storage of all scientific data, and science-oriented data processing. The DPC takes part in the information exchange with two tracking stations in Pushchino (Russia) and Green Bank (USA), about 30 ground telescopes, the ballistic center, tracking headquarters and the session scheduling center. Enormous flows of information go to the Astro Space Center. To handle these enormous data volumes we have developed specialized network infrastructure, Internet channels and storage. The computer complex has been designed at the Astro Space Center (ASC) of the Lebedev Physical Institute and includes: 800 TB of on-line storage; a 2000 TB hard drive archive; a backup system on magnetic tapes (2000 TB); 24 TB of redundant storage at the Pushchino Radio Astronomy Observatory; Web and FTP servers; and DPC management and data transmission networks. The structure and functions of the ASC Data Processing Center are fully adequate to the data processing requirements of the Radioastron Mission, as has been successfully confirmed during the Fringe Search, the Early Science Program and the first year of the Key Science Program.
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
Unlocking the Power of Big Data at the National Institutes of Health.
Coakley, Meghan F; Leerkes, Maarten R; Barnett, Jason; Gabrielian, Andrei E; Noble, Karlynn; Weber, M Nick; Huyen, Yentram
2013-09-01
The era of "big data" presents immense opportunities for scientific discovery and technological progress, with the potential to have enormous impact on research and development in the public sector. In order to capitalize on these benefits, there are significant challenges to overcome in data analytics. The National Institute of Allergy and Infectious Diseases held a symposium entitled "Data Science: Unlocking the Power of Big Data" to create a forum for big data experts to present and share some of the creative and innovative methods for gleaning valuable knowledge from an overwhelming flood of biological data. A significant investment in infrastructure and tool development, along with more and better-trained data scientists, may facilitate methods for assimilation of data and machine learning, to overcome obstacles such as data security, data cleaning, and data integration.
Unlocking the Power of Big Data at the National Institutes of Health
Coakley, Meghan F.; Leerkes, Maarten R.; Barnett, Jason; Gabrielian, Andrei E.; Noble, Karlynn; Weber, M. Nick
2013-01-01
The era of “big data” presents immense opportunities for scientific discovery and technological progress, with the potential to have enormous impact on research and development in the public sector. In order to capitalize on these benefits, there are significant challenges to overcome in data analytics. The National Institute of Allergy and Infectious Diseases held a symposium entitled “Data Science: Unlocking the Power of Big Data” to create a forum for big data experts to present and share some of the creative and innovative methods for gleaning valuable knowledge from an overwhelming flood of biological data. A significant investment in infrastructure and tool development, along with more and better-trained data scientists, may facilitate methods for assimilation of data and machine learning, to overcome obstacles such as data security, data cleaning, and data integration. PMID:27442200
The power of the Brown v. Board of Education decision: theorizing threats to sustainability.
Fine, Michelle
2004-09-01
Interviews with African American and White American elders capture the immediate power of the Brown v. Board of Education (1954) decision and the biography of its impact over time. This article reviews the lived experience of the decision and theorizes 3 threats to sustainability that ruthlessly undermined the decision over time: (a) the unacknowledged and enormous sacrifice endured by the African American community in the name of desegregation; (b) the violent and relentless resistance to the decision by government officials, educators, and many White community members; and (c) the dramatic shrinkage of the vision of Brown from the dismantling of White supremacy to a technical matter of busing. Implications are drawn for the study of desegregation and for the study of sustainability of social justice more broadly. ((c) 2004 APA, all rights reserved)
Strategic Technologies for Deep Space Transport
NASA Technical Reports Server (NTRS)
Litchford, Ronald J.
2016-01-01
Deep space transportation capability for science and exploration is fundamentally limited by available propulsion technologies. Traditional chemical systems are performance-plateaued and require enormous Initial Mass in Low Earth Orbit (IMLEO), whereas solar electric propulsion systems are power limited and unable to execute rapid transits. Nuclear-based propulsion and alternative energetic methods, on the other hand, represent potential avenues, perhaps the only viable avenues, to high specific power space transport evincing reduced trip time, reduced IMLEO, and expanded deep space reach. Here, key deep space transport mission capability objectives are reviewed in relation to STMD technology portfolio needs, and the advanced propulsion technology solution landscape is examined, including open questions, technical challenges, and developmental prospects. Options for potential future investment across the full complement of STMD programs are presented based on an informed awareness of complementary activities in industry, academia, OGAs, and NASA mission directorates.
NASA Astrophysics Data System (ADS)
Homeyer, H.; Mahnke, H.-E.
1996-12-01
Energetic ion beams, originally the domain of nuclear physics, are becoming increasingly important tools in many other fields of research and development. The choice of ion species and ion energy allows an enormously wide variation of the penetration depth and of the electronic stopping power. These features are utilized to modify or damage materials and living tissues in a specific way. Materials modification with energetic ion beams is one of the central aims of research and development at the ion beam laboratory ISL-Berlin, a center for ion-beam applications at the Hahn-Meitner-Institut Berlin. In particular, energetic protons will be used for eye cancer treatment. Selected topics such as the "single-event burnout" of high power diodes and the eye cancer therapy setup will be presented in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quester, G.H.
The gap between studies of military history and military strategy is ever widening. The enormous destructive power of nuclear weapons has tended to persuade us that the military experience of the first half of this century is not relevant to more ''modern'' military questions. In Deterrence before Hiroshima, first published in 1966, George H. Quester analyzes pre-nuclear age theories of deterrence to equip us with a perspective and data by which current theories can be evaluated. Quester shows that from almost the time of the first military aircraft, air-power was believed to have the capacity for apocalyptic destruction. He points out that the modern terms deterrence, limited war, tacit agreement, and balance of terror show up often in the literature from 1900-1945, coupled with war scenarios every bit as awesome as a nuclear holocaust.
Metal Matrix Superconductor Composites for SMES-Driven, Ultra High Power BEP Applications: Part 2
NASA Astrophysics Data System (ADS)
Gross, Dan A.; Myrabo, Leik N.
2006-05-01
A 2.5 TJ superconducting magnetic energy storage (SMES) design presentation is continued from the preceding paper (Part 1) with electromagnetic and associated stress analysis. The application of interest is a rechargeable power-beaming infrastructure for manned microwave Lightcraft operations. It is demonstrated that while operational performance is within manageable parameter bounds, quench (loss of superconducting state) imposes enormous electrical stresses. Therefore, alternative multiple toroid modular configurations are identified, alleviating simultaneously all excessive stress conditions, operational and quench, in the structural, thermal and electromagnetic sense — at some reduction in specific energy, but presenting programmatic advantages for a lengthy technology development, demonstration and operation schedule. To this end several natural units, based on material properties and operating parameters are developed, in order to identify functional relationships and optimization paths more effectively.
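For a sense of scale of a 2.5 TJ SMES, the stored magnetic energy of a coil follows the usual relation; the inductance and current below are purely illustrative assumptions, not design values from the paper:
\[
  E = \tfrac{1}{2} L I^{2}, \qquad \text{e.g.}\ \ \tfrac{1}{2}\,(500\ \mathrm{H})\,(100\ \mathrm{kA})^{2} = 2.5\ \mathrm{TJ}.
\]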
The potential application and challenge of powerful CRISPR/Cas9 system in cardiovascular research.
Li, Yangxin; Song, Yao-Hua; Liu, Bin; Yu, Xi-Yong
2017-01-15
CRISPR/Cas9 is a precision-guided munition found in bacteria to fight against invading viruses. This technology has enormous potential applications, including altering genes in both somatic and germ cells, as well as generating knockout animals. Compared to other gene editing techniques such as zinc finger nucleases and TALENS, CRISPR/Cas9 is much easier to use and highly efficient. Importantly, the multiplex capacity of this technology allows multiple genes to be edited simultaneously. CRISPR/Cas9 also has the potential to prevent and cure human diseases. In this review, we wish to highlight some key points regarding the future prospect of using CRISPR/Cas9 as a powerful tool for cardiovascular research, and as a novel therapeutic strategy to treat cardiovascular diseases. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Lee, W Anthony
2007-01-01
The gold standard for preoperative evaluation of an aortic aneurysm is a computed tomography angiogram (CTA). Three-dimensional reconstruction and analysis of the computed tomography data set is enormously helpful, and even sometimes essential, in proper sizing and planning for endovascular stent graft repair. To a large extent, it has obviated the need for conventional angiography for morphologic evaluation. The TeraRecon Aquarius workstation (San Mateo, Calif) represents a highly sophisticated but user-friendly platform utilizing a combination of task-specific hardware and software specifically designed to rapidly manipulate large Digital Imaging and Communications in Medicine (DICOM) data sets and provide surface-shaded and multiplanar renderings in real-time. This article discusses the basics of sizing and planning for endovascular abdominal aortic aneurysm repair and the role of 3-dimensional analysis using the TeraRecon workstation.
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
A Computer Knowledge Database of accidents at work in the construction industry
NASA Astrophysics Data System (ADS)
Hoła, B.; Szóstak, M.
2017-10-01
At least 60,000 fatal accidents at work occur on building sites all over the world each year, which means that, on average, an employee dies during the execution of work every 10 minutes. In 2015, 5,776 accidents at work happened on Polish building sites, of which 69 resulted in the death of an employee. Accidents are an enormous social and economic burden for companies, communities and countries. The vast majority of accidents at work can be prevented by appropriate and effective preventive measures. Therefore, the Computer Knowledge Database (CKD) was developed for this purpose; it enables data and information on accidents at work in the construction industry to be collected and processed in order to obtain the necessary knowledge. The knowledge gained will form the basis for conclusions of a preventive nature.
Tumor purity and differential methylation in cancer epigenomics.
Wang, Fayou; Zhang, Naiqian; Wang, Jun; Wu, Hao; Zheng, Xiaoqi
2016-11-01
DNA methylation is an epigenetic modification of DNA molecule that plays a vital role in gene expression regulation. It is not only involved in many basic biological processes, but also considered an important factor for tumorigenesis and other human diseases. Study of DNA methylation has been an active field in cancer epigenomics research. With the advances of high-throughput technologies and the accumulation of enormous amount of data, method development for analyzing these data has gained tremendous interests in the fields of computational biology and bioinformatics. In this review, we systematically summarize the recent developments of computational methods and software tools in high-throughput methylation data analysis with focus on two aspects: differential methylation analysis and tumor purity estimation in cancer studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
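A common starting point for the tumor purity problem discussed here (a generic mixture formulation rather than any specific method from the review) models each observed methylation level at CpG site i as a purity-weighted combination of tumor and normal signals:
\[
  \beta_{\mathrm{obs}}(i) \;=\; \lambda\,\beta_{\mathrm{tumor}}(i) + (1-\lambda)\,\beta_{\mathrm{normal}}(i), \qquad 0 \le \lambda \le 1,
\]
so estimating the purity \(\lambda\) and calling differential methylation (\(\beta_{\mathrm{tumor}}\) versus \(\beta_{\mathrm{normal}}\)) become coupled inference problems on the observed \(\beta\) values.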
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; ...
2016-01-19
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. As a result, the developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance.
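As a generic illustration of the informatics-based statistical learning route described above (not the authors' actual model; the descriptors, data and regressor below are placeholders), a table of elemental features can be regressed onto bandgaps with, for example, kernel ridge regression:

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    # Placeholder data: rows = double perovskites, columns = elemental descriptors
    # (e.g. electronegativities, lowest occupied Kohn-Sham levels); targets = bandgaps (eV).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                 # 6 illustrative features
    y = X @ np.array([0.5, -0.3, 0.8, 0.1, 0.0, 0.2]) + rng.normal(0.0, 0.1, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1).fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))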
NASA Technical Reports Server (NTRS)
Prufert-Bebout, Lee
2001-01-01
Microbial life on Earth is enormously abundant at sediment-water interfaces. The fossil record in fact contains abundant evidence of the preservation of life on such surfaces. It is therefore critical to our interpretation of early Earth history, and potentially to history of life on other planets, to be able to recognize life forms at these interfaces. On Earth this life often occurs as organized structures of microbes and their extracellular exudates known as biofilms. When such biofilms occur in areas receiving sunlight photosynthetic biofilms are the dominant form in natural ecosystems due to selective advantage inherent in their ability to utilize solar energy. Cyanobacteria are the dominant phototrophic microbes in most modern and ancient photosynthetic biofilms, microbial mats and stromatolites. Due to their long (3.5 billion year) evolutionary history, this group has extensively diversified resulting in an enormous array of morphologies and physiological abilities. This enormous diversity and specialization results in very specific selection for a particular cyanobacterium in each available photosynthetic niche. Furthermore these organisms can alter their spatial orientation, cell morphology, pigmentation and associations with heterotrophic organisms in order to fine tune their optimization to a given micro-niche. These adaptations can be detected, and if adequate knowledge of the interaction between environmental conditions and organism response is available, the detectable organism response can be used to infer the environmental conditions causing that response. This presentation will detail two specific examples which illustrate this point. Light and water are essential to photosynthesis in cyanobacteria and these organisms have specific detectable behavioral responses to these parameters. We will present cyanobacterial responses to quantified flow and irradiance to demonstrate the interpretative power of distribution and orientation information. This study presents new results, but many such examples are already found in the literature. However this information exists in such a wide variety of journals, spanning decades of research that the utility of the vast storehouse of information is limited, not by the ability of cyanobacteria to respond in recognizable ways to environmental stimuli, but by our ability to compile and use this information. Recent advances in information technology will soon allow us to overcome these difficulties and utilize the detailed responses of cyanobacteria to environmental microniches as powerful records of the interaction between the biosphere and lithosphere.
Strogatz, S H
2001-03-08
The study of networks pervades all of science, from neurobiology to statistical physics. The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems-be they neurons, power stations or lasers-will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks.
How dangerous are mobile phones, transmission masts, and electricity pylons?
Wood, A W
2006-04-01
Electrical power and mobile communications deliver enormous benefit to society, but there are concerns whether the electric and magnetic field (EMF) emissions associated with the delivery of this benefit are linked to cancer or other health hazards. This article reviews the strength of the available epidemiological and laboratory evidence and notes that this falls short of what is normally required to establish a causal link. However, because of scientific uncertainty a cautious approach is often advocated, but here, too, there may be a tendency to judge these risks more harshly than those in other areas with similar strength of evidence.
Waste incineration industry and development policies in China.
Li, Yun; Zhao, Xingang; Li, Yanbin; Li, Xiaoyu
2015-12-01
The growing pollution from municipal solid waste due to economic growth and urbanization has brought great challenge to China. The main method of waste disposal has gradually changed from landfill to incineration, because of the enormous land occupation by landfills. The paper presents the results of a study of the development status of the upstream and downstream of the waste incineration industry chain in China, reviews the government policies for the waste incineration power industry, and provides a forecast of the development trend of the waste incineration industry. Copyright © 2015 Elsevier Ltd. All rights reserved.
Massively parallel GPU-accelerated minimization of classical density functional theory
NASA Astrophysics Data System (ADS)
Stopper, Daniel; Roth, Roland
2017-08-01
In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavy parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.
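For readers unfamiliar with what such a minimization loop looks like, the following is a minimal sequential Picard-iteration sketch of a density-profile update (the external potential, parameters and the stubbed-out excess term are assumptions; the paper's hard-sphere functional and GPU kernels are far more involved):

    import numpy as np

    # Minimal sequential Picard iteration for a 1D density profile in an external
    # potential. The excess (hard-sphere/FMT) term c1 is stubbed out to zero, i.e.
    # this reduces to the ideal gas; it only illustrates the iteration structure.
    beta, rho_bulk, mix = 1.0, 0.5, 0.1          # illustrative parameters
    z = np.linspace(0.0, 10.0, 1001)
    V_ext = np.where(z < 1.0, 50.0, 0.0)         # a repulsive wall (assumption)

    def c1(rho):
        """Placeholder for -beta * (functional derivative of the excess free energy)."""
        return np.zeros_like(rho)

    rho = np.full_like(z, rho_bulk)
    for it in range(2000):
        rho_new = rho_bulk * np.exp(-beta * V_ext + c1(rho))
        if np.max(np.abs(rho_new - rho)) < 1e-10:
            break
        rho = (1.0 - mix) * rho + mix * rho_new  # damped (mixed) Picard update
    print("iterations:", it + 1, "| density far from the wall:", rho[-1])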
Cui, Zhihua; Zhang, Yi
2014-02-01
As a promising and innovative research field, bioinformatics has attracted increasing attention recently. Among the enormous number of open problems in this field, one fundamental issue is the development of accurate and efficient computational methodology that can deal with tremendous amounts of data. In this paper, we survey applications of swarm intelligence to the discovery of patterns in multiple sequences. To provide a deeper insight, ant colony optimization, particle swarm optimization, artificial bee colony and the artificial fish swarm algorithm are selected, and their applications to the multiple sequence alignment and motif detection problems are discussed.
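For orientation on the methods surveyed, the canonical particle swarm update (a textbook form rather than any specific variant from the survey) moves each candidate solution x_i with velocity v_i toward its personal best p_i and the swarm's global best g:
\[
  v_i \leftarrow \omega\, v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i), \qquad x_i \leftarrow x_i + v_i,
\]
with inertia weight \(\omega\), acceleration constants \(c_1, c_2\), and uniform random numbers \(r_1, r_2 \in [0,1]\).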
Signorelli, C; Lepratto, M; Summa, A
2005-01-01
The enormous increase in computer use in work activities has brought great progress and many other advantages, but it has also brought possible health problems for workers. The occupational risks for VDT workers involve the visual system, work-related musculoskeletal disorders and also mental health. This article concerns the major problems related to the obligations of the employer and to health surveillance, with special attention to the ophthalmological examination for the assessment of fitness, the responsibilities and duties of occupational physicians (medici competenti) and the possible role of ophthalmologists.
Integrating perioperative information from divergent sources.
Frost, Elizabeth A M
2012-01-01
The enormous diversity of physician practices, including specialists, and of patient requirements and comorbidities makes the integration of appropriate perioperative information difficult. The lack of communicating computer systems adds to the difficulty of assembling data. Meta-analyses and evidence-based studies indicate that far too many tests are performed perioperatively. Guidelines for appropriate perioperative management have been formulated by several specialties. Current findings and requirements should be better communicated to surgeons, consultants, and patients to better meet healthcare needs and at the same time decrease costs. Means of improving communication through interpersonal collaboration are outlined. © 2012 Mount Sinai School of Medicine.
Average power scaling of UV excimer lasers drives flat panel display and lidar applications
NASA Astrophysics Data System (ADS)
Herbst, Ludolf; Delmdahl, Ralph F.; Paetzel, Rainer
2012-03-01
Average power scaling of 308 nm excimer lasers has followed an evolutionary path over the last two decades, driven by diverse industrial UV laser microprocessing markets. Recently, a new dual-oscillator and beam management concept for high-average-power upscaling of excimer lasers has been realized, for the first time enabling as much as 1.2 kW of stabilized average output power at a UV wavelength of 308 nm. The new dual-oscillator concept enables low temperature polysilicon (LTPS) fabrication to be extended to generation-six glass substrates. This is essential for more economic high-volume manufacturing of flat panel displays for the soaring smartphone and tablet PC markets. Similarly, the cost-effective production of flexible displays is driven by 308 nm excimer laser power scaling. Flexible displays have enormous commercial potential and can largely use the same production equipment as is used for rigid display manufacturing. Moreover, the higher average output power of 308 nm excimer lasers helps reduce measurement time and improve the signal-to-noise ratio in the worldwide network of high-altitude Raman lidar stations. The availability of kW-class 308 nm excimer lasers has the potential to take lidar backscattering signal strength and achievable altitude to new levels.
Issues in undergraduate education in computational science and high performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchioro, T.L. II; Martin, D.
1994-12-31
The ever-increasing need for mathematical and computational literacy within society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing ``Information Superhighway``?
Hypersonic Shock Wave Computations Using the Generalized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh; Chen, Rui; Cheremisin, Felix G.
2006-11-01
Hypersonic shock structure in diatomic gases is computed by solving the Generalized Boltzmann Equation (GBE), where the internal and translational degrees of freedom are considered in the framework of quantum and classical mechanics respectively [1]. The computational framework available for the standard Boltzmann equation [2] is extended by including both the rotational and vibrational degrees of freedom in the GBE. There are two main difficulties encountered in computation of high Mach number flows of diatomic gases with internal degrees of freedom: (1) a large velocity domain is needed for accurate numerical description of the distribution function resulting in enormous computational effort in calculation of the collision integral, and (2) about 50 energy levels are needed for accurate representation of the rotational spectrum of the gas. Our methodology addresses these problems, and as a result the efficiency of calculations has increased by several orders of magnitude. The code has been validated by computing the shock structure in Nitrogen for Mach numbers up to 25 including the translational and rotational degrees of freedom. [1] Beylich, A., ``An Interlaced System for Nitrogen Gas,'' Proc. of CECAM Workshop, ENS de Lyon, France, 2000. [2] Cheremisin, F., ``Solution of the Boltzmann Kinetic Equation for High Speed Flows of a Rarefied Gas,'' Proc. of the 24th Int. Symp. on Rarefied Gas Dynamics, Bari, Italy, 2004.
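For orientation, the standard (single-species, elastic) Boltzmann equation that the cited solver framework [2] discretizes has the form below; the Generalized Boltzmann Equation additionally carries a sum over discrete rotational and vibrational levels, which is what multiplies the velocity-space cost noted above:
\[
  \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
  = \int_{\mathbb{R}^{3}}\!\int_{S^{2}} \bigl(f'\,f'_{*} - f\,f_{*}\bigr)\,
    |\mathbf{v}-\mathbf{v}_{*}|\,\sigma(\Omega)\,\mathrm{d}\Omega\,\mathrm{d}\mathbf{v}_{*},
\]
where \(f = f(\mathbf{x},\mathbf{v},t)\), the asterisk marks the collision partner, primes denote post-collision velocities, and \(\sigma\) is the differential cross-section.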
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
Bounds on the power of proofs and advice in general physical theories.
Lee, Ciarán M; Hoban, Matty J
2016-06-01
Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text], which does not make use of any uniquely quantum structure-such as the fact that observables correspond to self-adjoint operators-and thus may be of independent interest.
Biologically inspired technologies using artificial muscles
NASA Astrophysics Data System (ADS)
Bar-Cohen, Yoseph
2005-01-01
After billions of years of evolution, nature has developed inventions that work, that are appropriate for the intended tasks, and that last. The evolution of nature led to the introduction of highly effective and power-efficient biological mechanisms that are scalable from microns to many meters in size. Imitating these mechanisms offers enormous potential for improving our lives and the tools we use. Humans have always made efforts to imitate nature, and we are increasingly reaching levels of advancement where it becomes significantly easier to imitate, copy, and adapt biological methods, processes and systems. Some of the biomimetic technologies that have emerged include artificial muscles, artificial intelligence, and artificial vision, to which significant advances in materials science, mechanics, electronics, and computer science have contributed greatly. One of the newest fields of biomimetics is that of electroactive polymers (EAP), also known as artificial muscles. To take advantage of these materials, efforts are being made worldwide to establish a strong infrastructure addressing the need for comprehensive analytical modeling of their operation mechanisms and to develop effective processing and characterization techniques. The field is still emerging and robust materials are not readily available; however, in recent years significant progress has been made and commercial products have already started to appear. This paper covers the state of the art and the challenges in making artificial muscles and their potential biomimetic applications.
NASA Astrophysics Data System (ADS)
Hayasaki, Yoshio
2017-02-01
Femtosecond laser processing is a promising tool for fabricating novel and useful structures on the surfaces of and inside materials. An enormous number of pulse irradiation points is required to fabricate actual structures at the millimeter scale, and therefore the throughput of femtosecond laser processing must be improved for practical adoption of this technique. One promising method to improve throughput is parallel pulse generation based on a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM), a technique called holographic femtosecond laser processing. The holographic method has advantages such as high throughput, high light-use efficiency, and variable, instantaneous, and 3D patterning. Furthermore, the use of an SLM gives the ability to correct unknown imperfections of the optical system and inhomogeneity in a sample using in-system optimization of the CGH. The CGH can also adaptively compensate for unpredictable dynamic mechanical movements, air and liquid disturbances, and shape variation and deformation of the target sample, as well as providing adaptive wavefront control for environmental changes. It is therefore a powerful tool for processing biological cells and tissues, because they have free-form, variable, and deformable structures. In this paper, we present the principle and the experimental setup of holographic femtosecond laser processing, and effective ways of processing biological samples. We demonstrate the femtosecond laser processing of biological materials and the resulting processing properties.
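Computing the CGH for such parallel pulse generation is commonly done with an iterative Fourier-transform (Gerchberg-Saxton type) algorithm; the following is a minimal sketch of that general approach (the target spot pattern, grid size and iteration count are illustrative assumptions, not the authors' implementation):

    import numpy as np

    # Minimal Gerchberg-Saxton sketch: find a phase-only hologram whose far field
    # concentrates energy on a chosen pattern of focal spots.
    N, iters = 256, 50
    target = np.zeros((N, N))
    target[96, 96] = target[96, 160] = target[160, 128] = 1.0   # three parallel spots
    target /= np.sqrt((target ** 2).sum())

    phase = 2 * np.pi * np.random.default_rng(0).random((N, N)) # random initial phase
    for _ in range(iters):
        slm_field = np.exp(1j * phase)                 # uniform illumination on the SLM
        image = np.fft.fftshift(np.fft.fft2(slm_field))
        image = target * np.exp(1j * np.angle(image))  # impose target amplitude, keep phase
        phase = np.angle(np.fft.ifft2(np.fft.ifftshift(image)))

    recon = np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * phase)))) ** 2
    print("fraction of energy in the three spots:",
          recon[target > 0].sum() / recon.sum())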
Ultralow-power organic complementary circuits.
Klauk, Hagen; Zschieschang, Ute; Pflaum, Jens; Halik, Marcus
2007-02-15
The prospect of using low-temperature processable organic semiconductors to implement transistors, circuits, displays and sensors on arbitrary substrates, such as glass or plastics, offers enormous potential for a wide range of electronic products. Of particular interest are portable devices that can be powered by small batteries or by near-field radio-frequency coupling. The main problem with existing approaches is the large power consumption of conventional organic circuits, which makes battery-powered applications problematic, if not impossible. Here we demonstrate an organic circuit with very low power consumption that uses a self-assembled monolayer gate dielectric and two different air-stable molecular semiconductors (pentacene and hexadecafluorocopperphthalocyanine, F16CuPc). The monolayer dielectric is grown on patterned metal gates at room temperature and is optimized to provide a large gate capacitance and low gate leakage currents. By combining low-voltage p-channel and n-channel organic thin-film transistors in a complementary circuit design, the static currents are reduced to below 100 pA per logic gate. We have fabricated complementary inverters, NAND gates, and ring oscillators that operate with supply voltages between 1.5 and 3 V and have a static power consumption of less than 1 nW per logic gate. These organic circuits are thus well suited for battery-powered systems such as portable display devices and large-surface sensor networks as well as for radio-frequency identification tags with extended operating range.
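As a back-of-envelope consistency check of the figures quoted above, the static dissipation of a gate is bounded by the supply voltage times the static current:
\[
  P_{\mathrm{static}} \approx V_{DD}\, I_{\mathrm{static}} \lesssim 3\ \mathrm{V} \times 100\ \mathrm{pA} = 0.3\ \mathrm{nW}\ \text{per logic gate},
\]
comfortably within the stated 1 nW bound.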
Molecular dynamics simulations through GPU video games technologies
Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia
2016-01-01
Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the most important de novo techniques used to understand the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method to simulate the physical motions of atoms and molecules under certain conditions. It has become a strategic technique and now plays a key role in many areas of the exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations can require enormous amounts of computer memory and time, and their execution has therefore been a major problem. Despite the huge computational cost, molecular dynamics has traditionally been implemented on computers built around a central processing unit (CPU). Graphics processing unit (GPU) computing technology was first designed to improve video games by rapidly creating and displaying images in a frame buffer for output to a screen. The hybrid GPU-CPU implementation, combined with parallel computing, is a novel technology for performing a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations, including MD simulations. Herein, we describe the new methodologies initially developed for video games and how they are now applied in MD simulations. PMID:27525251
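As a reminder of the computation an MD engine repeats billions of times (a textbook velocity-Verlet step with a toy harmonic force, not the GPU kernels discussed in this review):

    import numpy as np

    # Textbook velocity-Verlet integrator with a toy harmonic force; real MD codes
    # replace forces() with bonded + non-bonded terms and parallelize this loop.
    def forces(x, k=1.0):
        return -k * x                            # F = -grad U for U = k|x|^2 / 2

    def velocity_verlet(x, v, dt, m=1.0, steps=1000):
        f = forces(x)
        for _ in range(steps):
            x = x + v * dt + 0.5 * (f / m) * dt ** 2
            f_new = forces(x)
            v = v + 0.5 * (f + f_new) / m * dt
            f = f_new
        return x, v

    x0 = np.array([[1.0, 0.0, 0.0]])             # one particle in 3D
    v0 = np.zeros_like(x0)
    x, v = velocity_verlet(x0, v0, dt=0.01)
    print("position after 1000 steps:", x, "velocity:", v)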
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
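The "charge-conserving" property referred to above can be stated compactly (a generic statement of the constraint, not VORPAL's specific deposition scheme): the deposited current must satisfy the discrete continuity equation, so that Gauss's law, once satisfied, remains satisfied under the Ampère update alone,
\[
  \frac{\rho^{\,n+1}-\rho^{\,n}}{\Delta t} + \nabla\!\cdot\!\mathbf{J}^{\,n+1/2} = 0
  \quad\Longrightarrow\quad
  \nabla\!\cdot\!\mathbf{E}^{\,n+1} = \rho^{\,n+1}/\varepsilon_0
  \ \text{ follows from }\ 
  \frac{\mathbf{E}^{\,n+1}-\mathbf{E}^{\,n}}{\Delta t} = c^{2}\,\nabla\times\mathbf{B}^{\,n+1/2} - \mathbf{J}^{\,n+1/2}/\varepsilon_0 ,
\]
since the divergence of the curl vanishes; this is why no global Poisson solve is needed at each step.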
Discovering epistasis in large scale genetic association studies by exploiting graphics cards.
Chen, Gary K; Guo, Yunfei
2013-12-03
Despite the enormous investments made in collecting DNA samples and generating germline variation data across thousands of individuals in modern genome-wide association studies (GWAS), progress has been frustratingly slow in explaining much of the heritability in common disease. Today's paradigm of testing independent hypotheses on each single nucleotide polymorphism (SNP) marker is unlikely to adequately reflect the complex biological processes in disease risk. Alternatively, modeling risk as an ensemble of SNPs that act in concert in a pathway, and/or interact non-additively on log risk for example, may be a more sensible way to approach gene mapping in modern studies. Implementing such analyses genome-wide can quickly become intractable, because even modest-size SNP panels on modern genotype arrays (500k markers) pose a combinatorial nightmare, requiring tens of billions of models to be tested for evidence of interaction. In this article, we provide an in-depth analysis of programs that have been developed to explicitly overcome these enormous computational barriers through the use of processors on graphics cards known as Graphics Processing Units (GPU). We include tutorials on GPU technology, which will convey why they are growing in appeal with today's numerical scientists. One obvious advantage is the impressive density of microprocessor cores that are available on just a single GPU. Whereas high-end servers feature up to 24 Intel or AMD CPU cores, the latest GPU offerings from nVidia feature over 2600 cores. Each compute node may be outfitted with up to 4 GPU devices. Success on GPUs varies across problems. However, epistasis screens fare well due to the high degree of parallelism exposed in these problems. Papers that we review routinely report GPU speedups of over two orders of magnitude (>100x) over standard CPU implementations.
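To make the combinatorial argument above concrete, here is a small back-of-the-envelope calculation (mine, not from the paper): an exhaustive pairwise epistasis screen over k SNPs requires C(k, 2) interaction models, so a 500k-marker array already implies over 10^11 tests.

```python
from math import comb

def pairwise_models(n_snps: int) -> int:
    """Number of two-way interaction models in an exhaustive epistasis screen."""
    return comb(n_snps, 2)

for k in (10_000, 100_000, 500_000, 1_000_000):
    n = pairwise_models(k)
    print(f"{k:>9,} SNPs -> {float(n):.3e} pairwise models")
# 500,000 SNPs -> ~1.25e+11 models, i.e. well beyond "tens of billions"
```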
Discovering epistasis in large scale genetic association studies by exploiting graphics cards
Chen, Gary K.; Guo, Yunfei
2013-01-01
Despite the enormous investments made in collecting DNA samples and generating germline variation data across thousands of individuals in modern genome-wide association studies (GWAS), progress has been frustratingly slow in explaining much of the heritability in common disease. Today's paradigm of testing independent hypotheses on each single nucleotide polymorphism (SNP) marker is unlikely to adequately reflect the complex biological processes in disease risk. Alternatively, modeling risk as an ensemble of SNPs that act in concert in a pathway, and/or interact non-additively on log risk for example, may be a more sensible way to approach gene mapping in modern studies. Implementing such analyses genome-wide can quickly become intractable, because even modest-size SNP panels on modern genotype arrays (500k markers) pose a combinatorial nightmare, requiring tens of billions of models to be tested for evidence of interaction. In this article, we provide an in-depth analysis of programs that have been developed to explicitly overcome these enormous computational barriers through the use of processors on graphics cards known as Graphics Processing Units (GPU). We include tutorials on GPU technology, which will convey why they are growing in appeal with today's numerical scientists. One obvious advantage is the impressive density of microprocessor cores that are available on just a single GPU. Whereas high-end servers feature up to 24 Intel or AMD CPU cores, the latest GPU offerings from nVidia feature over 2600 cores. Each compute node may be outfitted with up to 4 GPU devices. Success on GPUs varies across problems. However, epistasis screens fare well due to the high degree of parallelism exposed in these problems. Papers that we review routinely report GPU speedups of over two orders of magnitude (>100x) over standard CPU implementations. PMID:24348518
Toward Accessing Spatial Structure from Building Information Models
NASA Astrophysics Data System (ADS)
Schultz, C.; Bhatt, M.
2011-08-01
Data about building designs and layouts is becoming increasingly more readily available. In the near future, service personnel (such as maintenance staff or emergency rescue workers) arriving at a building site will have immediate real-time access to enormous amounts of data relating to structural properties, utilities, materials, temperature, and so on. The critical problem for users is the taxing and error-prone task of interpreting such a large body of facts in order to extract salient information. This is necessary for comprehending a situation and deciding on a plan of action, and is a particularly serious issue in time-critical and safety-critical activities such as firefighting. Current unifying building models such as the Industry Foundation Classes (IFC), while being comprehensive, do not directly provide data structures that focus on spatial reasoning and spatial modalities that are required for high-level analytical tasks. The aim of the research presented in this paper is to provide computational tools for higher level querying and reasoning that shift the cognitive burden of dealing with enormous amounts of data away from the user. The user can then spend more energy and time in planning and decision making in order to accomplish the tasks at hand. We present an overview of our framework that provides users with an enhanced model of "built-up space". In order to test our approach using realistic design data (in terms of both scale and the nature of the building models) we describe how our system interfaces with IFC, and we conduct timing experiments to determine the practicality of our approach. We discuss general computational approaches for deriving higher-level spatial modalities by focusing on the example of route graphs. Finally, we present a firefighting scenario with alternative route graphs to motivate the application of our framework.
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation power or transmission power matters more for the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit a high volume of traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
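A worked sketch of the Landauer reference point mentioned above (the operation rate used below is purely illustrative, not from the paper): the minimum energy to erase one bit is k_B*T*ln 2, which sets a thermodynamic floor on computation power far below the hundreds of watts measured at real base stations.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def landauer_limit_watts(bit_ops_per_second: float, temperature_k: float = 300.0) -> float:
    """Thermodynamic lower bound on computation power for a given bit-operation rate."""
    energy_per_bit = K_B * temperature_k * math.log(2)   # joules per irreversible bit operation
    return bit_ops_per_second * energy_per_bit

# Illustrative: 1e20 bit operations per second at room temperature
print(f"{landauer_limit_watts(1e20):.3e} W")   # ~2.9e-01 W, orders of magnitude below a real BS
```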
Energy Efficiency Challenges of 5G Small Cell Networks
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-01-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation power or transmission power matters more for the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit a high volume of traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. PMID:28757670
Smart power. Great leaders know when hard power is not enough.
Nye, Joseph S
2008-11-01
The next U.S. administration will face enormous challenges to world peace, the global economy, and the environment. Exercising military and economic muscle alone will not bring peace and prosperity. According to Nye, a former U.S. government official and a former dean at Harvard University's John F. Kennedy School of Government, the next president must be able to combine hard power, characterized by coercion, and what Nye calls "soft" power, which relies instead on attraction. The result is smart power, a tool great leaders use to mobilize people around agendas that look beyond current problems. Hard power is often necessary, Nye explains. In the 1990s, when the Taliban was providing refuge to Al Qaeda, President Clinton tried, and failed, to solve the problem diplomatically instead of destroying terrorist havens in Afghanistan. In other situations, however, soft power is more effective, though it has been too often overlooked. In Iraq, Nye argues, the use of soft power could draw young people toward something other than terrorism. "I think that there's an awakening to the need for soft power as people look at the crisis in the Middle East and begin to realize that hard power is not sufficient to resolve it," he says. Solving today's global problems will require smart power, a judicious blend of the other two powers. While there are notable examples of men who have used smart power, Teddy Roosevelt for instance, it's much more difficult for women to lead with smart power, especially in the United States, where women feel pressure to prove that they are not "soft." Only by exercising smart power, Nye says, can the next president of the United States set a new tone for U.S. foreign policy in this century.
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
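For readers unfamiliar with how such measurements feed program savings estimates, the sketch below (duty-cycle hours are hypothetical, not from the study) shows the usual unit energy consumption bookkeeping: annual energy is the sum over power modes of measured power times hours spent in that mode.

```python
def annual_uec_kwh(power_w: dict, hours_per_year: dict) -> float:
    """Unit energy consumption: sum of (mode power in W) x (annual hours in mode), in kWh."""
    assert set(power_w) == set(hours_per_year)
    return sum(power_w[m] * hours_per_year[m] for m in power_w) / 1000.0

# Hypothetical monitor: measured power levels (W) and an assumed duty cycle (hours/year, totalling 8760)
power = {"on": 35.0, "sleep": 2.0, "off": 1.0}
hours = {"on": 2000.0, "sleep": 2000.0, "off": 4760.0}
print(f"{annual_uec_kwh(power, hours):.1f} kWh/yr")   # ~78.8 kWh/yr under these assumptions
```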
NASA Astrophysics Data System (ADS)
Prince, N. H. E.
2005-10-01
Meaning and purpose can be given to life, consciousness, the laws of physics, etc., if one assumes that the Universe is endowed with some form of (strong) anthropic principle. In particular, the final anthropic principle (FAP) of Barrow and Tipler postulates that intelligent life will continue in the Universe until the far future, when the computational power of descendent civilizations will be sufficient to run simulations of enormous scale and power. Tipler has claimed that it will be possible to create simulations with rendered environments and inhabitants, i.e. intelligent software constructs, which are effectively ‘people’. Proponents of this FAP claim that if both substrate independence and the pattern identity postulate hold, then these simulations would be able to contain reanimated individuals that once lived. These claims have been heavily criticized, but the growing study of physical eschatology, initiated by Freeman Dyson in a seminal work, and developments in computational theory have made some progress in showing that simulations containing intelligent information-processing software constructs, which may be conscious, are not only feasible but may be a reality within the next few centuries. In this work, arguments and conservative calculations are given which concur with these latter, more minimal claims. FAP-type simulations inevitably depend on the type of cosmology assumed, and current observations would seem to rule the appropriate models out. However, it is argued that dark energy, as described in recent 'quintessence' cosmological models, may show the current conclusions from observations to be too presumptive. In this paper some relevant physical and cosmological aspects are reviewed in the light of the recent propositions regarding the plausibility of certain simulations given by Bostrom, and the longer-held postulate of finite nature due to Fredkin, which has grown in credibility following advances in quantum mechanics and the computational theory of cellular automata. This latter postulate supports the conclusions of Bostrom, which, under certain plausible assumptions, can imply that our Universe is itself already a simulated entity. It is demonstrated in this paper how atemporal memory connections could make efficient ancestor simulations possible, solving many of the objections faced by the FAP of Barrow and Tipler. Also, if finite nature is true then it can offer a similar vindication to this FAP. Indeed the conclusions of this postulate can be realized more easily, but only if the existence of life within the simulation/Universe is not merely incidental to the (currently unknown) purpose it was generated to fulfil.
Beam and Plasma Physics Research
1990-06-01
[Scanned report front matter; only fragments are legible. The report covers high-power microwave computations and theory and high-energy plasma computations and theory, including Task Area 2: High-Power RF Emission and Charged-Particle Beam Physics Computation, Modeling and Theory, with subtasks on Vulnerability of Space Assets, Microwave Computer Program Enhancements, and High-Power Microwave Transvertron Design.]
3-D Electromagnetic field analysis of wireless power transfer system using K computer
NASA Astrophysics Data System (ADS)
Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi
2018-05-01
We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. It is clarified that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.
Computer literacy in nursing education. An overview.
Newbern, V B
1985-09-01
Nursing educators are beginning to realize that computer literacy has become a survival skill for the profession. They understand that literacy must be at a level that assures the ability to manage and control the flood of available information and provides an openness and awareness of future technologic possibilities. The computer has been on college campuses for a number of years, used primarily for record storage and retrieval. However, early on a few nurse educators saw the potential for its use as a practice tool. Out of this foresight came both formal and nonformal educational offerings. The evolution of formal coursework in computer literacy has moved from learning about the computer to learning with the computer. Today the use of the computer is expanding geometrically as microcomputers become common. Graduate students and faculty use them for literature searches and data analysis. Undergraduates are routinely using computer-assisted instruction. Coursework in computer technology is fast becoming a given for nursing students and computer competency a requisite for faculty. However, inculcating computer competency in faculty and student repertoires is not an easy task. There are problems related to motivation, resources, and control. Territorial disputes between schools and colleges must be arbitrated. The interface with practice must be addressed. The paucity of adequate software is a real concern. But the potential is enormous, probably restricted only by human creativity. The possibilities for teaching and learning are profound, especially if geographical constraints can be effaced and scarce resources can be shared at minimal cost. Extremely sophisticated research designs and evaluation methodologies can be used routinely.(ABSTRACT TRUNCATED AT 250 WORDS)
Computer program analyzes and monitors electrical power systems (POSIMO)
NASA Technical Reports Server (NTRS)
Jaeger, K.
1972-01-01
Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. Computer program to analyze power system and generate set of characteristic power system data is described. Application to status indicators to denote different exclusive conditions is presented.
Multi-paradigm simulation at nanoscale: Methodology and application to functional carbon material
NASA Astrophysics Data System (ADS)
Su, Haibin
2012-12-01
Multiparadigm methods that span the scales from quantum mechanics to practical issues of functional nanoassembly and nanofabrication are enabling first-principles predictions to guide and complement experimental developments, by computationally designing and optimizing the materials compositions and structures needed to assemble nanoscale systems with the requisite properties. In this talk, we employ multi-paradigm approaches to investigate functional carbon materials with versatile character, including fullerene, carbon nanotube (CNT), graphene, and related hybrid structures, which have already created an enormous impact on next-generation nano devices. The topics will cover the reaction dynamics of C60 dimerization and the more challenging complex tubular fullerene formation process in peapod structures; the computational design of a new generation of peapod nano-oscillators; the predicted magnetic state in NanoBuds; opto-electronic properties of graphene nanoribbons; and disorder/vibronic effects on transport in carbon-rich materials.
Genetic networks and soft computing.
Mitra, Sushmita; Das, Ranajit; Hayashi, Yoichi
2011-01-01
The analysis of gene regulatory networks provides enormous information on various fundamental cellular processes involving growth, development, hormone secretion, and cellular communication. Their extraction from available gene expression profiles is a challenging problem. Such reverse engineering of genetic networks offers insight into cellular activity toward prediction of adverse effects of new drugs or possible identification of new drug targets. Tasks such as classification, clustering, and feature selection enable efficient mining of knowledge about gene interactions in the form of networks. It is known that biological data is prone to different kinds of noise and ambiguity. Soft computing tools, such as fuzzy sets, evolutionary strategies, and neurocomputing, have been found to be helpful in providing low-cost, acceptable solutions in the presence of various types of uncertainties. In this paper, we survey the role of these soft methodologies and their hybridizations, for the purpose of generating genetic networks.
RACORO Extended-Term Aircraft Observations of Boundary-Layer Clouds
NASA Technical Reports Server (NTRS)
Vogelmann, Andrew M.; McFarquhar, Greg M.; Ogren, John A.; Turner, David D.; Comstock, Jennifer M.; Feingold, Graham; Long, Charles N.; Jonsson, Haflidi H.; Bucholtz, Anthony; Collins, Don R.;
2012-01-01
Small boundary-layer clouds are ubiquitous over many parts of the globe and strongly influence the Earth's radiative energy balance. However, our understanding of these clouds is insufficient to solve pressing scientific problems. For example, cloud feedback represents the largest uncertainty amongst all climate feedbacks in general circulation models (GCMs). Several issues complicate understanding boundary-layer clouds and simulating them in GCMs. The high spatial variability of boundary-layer clouds poses an enormous computational challenge, since their horizontal dimensions and internal variability occur at spatial scales much finer than the computational grids used in GCMs. Aerosol-cloud interactions further complicate boundary-layer cloud measurement and simulation. Additionally, aerosols influence processes such as precipitation and cloud lifetime. An added complication is that at small scales (order meters to 10s of meters) distinguishing cloud from aerosol is increasingly difficult, due to the effects of aerosol humidification, cloud fragments and photon scattering between clouds.
OCIS: 15 years' experience with patient-centered computing.
Enterline, J P; Lenhard, R E; Blum, B I; Majidi, F M; Stuart, G J
1994-01-01
In the mid-1970s, the medical and administrative staff of the Oncology Center at Johns Hopkins Hospital recognized a need for a computer-based clinical decision-support system that organized patients' information according to the care continuum, rather than as a series of event-specific data. This is especially important in cancer patients, because of the long periods in which they receive complex medical treatment and the enormous amounts of data generated by extremely ill patients with multiple interrelated diseases. During development of the Oncology Clinical Information System (OCIS), it became apparent that administrative services, research systems, ancillary functions (such as drug and blood product ordering), and financial processes should be integrated with the basic patient-oriented database. With the structured approach used in applications development, new modules were added as the need for additional functions arose. The system has since been moved to a modern network environment with the capacity for client-server processing.
The Center for Multiscale Plasma Dynamics, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gombosi, Tamas I.
The University of Michigan participated in the joint UCLA/Maryland fusion science center focused on plasma physics problems for which the traditional separation of the dynamics into microscale and macroscale processes breaks down. These processes involve large scale flows and magnetic fields tightly coupled to the small scale, kinetic dynamics of turbulence, particle acceleration and energy cascade. The interaction between these vastly disparate scales controls the evolution of the system. The enormous range of temporal and spatial scales associated with these problems renders direct simulation intractable even in computations that use the largest existing parallel computers. Our efforts focused on two main problems: the development of Hall MHD solvers on solution adaptive grids and the development of solution adaptive grids using generalized coordinates so that the proper geometry of inertial confinement can be taken into account and efficient refinement strategies can be obtained.
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro
2017-08-01
In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of Things (IoT) applications. In energy-harvesting applications, because power supplies generated from renewable power sources cause frequent power failures, processed data need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows energy reductions of a few orders of magnitude in comparison with a volatile processor with SRAM.
Balancing computation and communication power in power constrained clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piga, Leonardo; Paul, Indrani; Huang, Wei
Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
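A schematic sketch of the decision logic described in this record (my simplification, not the patented implementation): nodes whose predicted wait exceeds a threshold are power-gated, and the power budget they release is redistributed to the nodes still working.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    predicted_wait_s: float
    power_budget_w: float
    gated: bool = False

def rebalance(nodes, wait_threshold_s=5.0):
    """Gate nodes whose predicted wait exceeds the threshold; pool and reassign their power."""
    reclaimed = 0.0
    for n in nodes:
        if not n.gated and n.predicted_wait_s > wait_threshold_s:
            n.gated = True                     # node enters a reduced power consumption state
            reclaimed += n.power_budget_w
            n.power_budget_w = 0.0
    active = [n for n in nodes if not n.gated]
    for n in active:                           # equal-share reassignment by a cluster agent
        n.power_budget_w += reclaimed / len(active)
    return nodes

cluster = [Node("n0", 0.5, 100.0), Node("n1", 12.0, 100.0), Node("n2", 8.0, 100.0)]
for node in rebalance(cluster):
    print(node)
```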
In Defense of the National Labs and Big-Budget Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodwin, J R
2008-07-29
The purpose of this paper is to present the unofficial and unsanctioned opinions of a Visiting Scientist at Lawrence Livermore National Laboratory on the values of LLNL and the other National Labs. The basic founding value and goal of the National Labs is big-budget scientific research, along with smaller-budget scientific research that cannot easily be done elsewhere. The most important example in the latter category is classified defense-related research. The historical guiding light here is the Manhattan Project. This endeavor was unique in human history, and might remain so. The scientific expertise and wealth of an entire nation was tapped in a project that was huge beyond reckoning, with no advance guarantee of success. It was in many respects a clash of scientific titans, with a large supporting cast, collaborating toward a single well-defined goal. Never had scientists received so much respect, so much money, and so much intellectual freedom to pursue scientific progress. And never was the gap between theory and implementation so rapidly narrowed, with results that changed the world, completely. Enormous resources are spent at the national or international level on large-scale scientific projects. LLNL has the most powerful computer in the world, Blue Gene/L. (Oops, Los Alamos just seized the title with Roadrunner; such titles regularly change hands.) LLNL also has the largest laser in the world, the National Ignition Facility (NIF). Lawrence Berkeley National Lab (LBNL) has the most powerful microscope in the world. Not only is it beyond the resources of most large corporations to make such expenditures, but the risk exceeds the possible rewards for those corporations that could. Nor can most small countries afford to finance large scientific projects, and not even the richest can afford largess, especially if Congress is under major budget pressure. Some big-budget research efforts are funded by international consortiums, such as the Large Hadron Collider (LHC) at CERN, and the International Tokamak Experimental Reactor (ITER) in Cadarache, France, a magnetic-confinement fusion research project. The post-WWII histories of particle and fusion physics contain remarkable examples of both international competition, with an emphasis on secrecy, and international cooperation, with an emphasis on shared knowledge and resources. Initiatives to share sometimes came from surprising directions. Most large-scale scientific projects have potential defense applications. NIF certainly does; it is primarily designed to create small-scale fusion explosions. Blue Gene/L operates in part in service to NIF, and in part to various defense projects. The most important defense projects include stewardship of the national nuclear weapons stockpile, and the proposed redesign and replacement of those weapons with fewer, safer, more reliable, longer-lived, and less apocalyptic warheads. Many well-meaning people will consider the optimal lifetime of a nuclear weapon to be zero, but most thoughtful people, when asked how much longer they think this nation will require them, will ask for some time to think. NIF is also designed to create exothermic small-scale fusion explosions. The malapropos 'exothermic' here is a convenience to cover a profusion of complexities, but the basic idea is that the explosions will create more recoverable energy than was used to create them.
One can hope that the primary future benefits of success for NIF will be in cost-effective generation of electrical power through controlled small-scale fusion reactions, rather than in improved large-scale fusion explosions. Blue Gene/L also services climate research, genomic research, materials research, and a myriad of other computational problems that become more feasible, reliable, and precise the larger the number of computational nodes employed. Blue Gene/L has to be sited within a security complex for obvious reasons, but its value extends to the nation and the world. There is a duality here between large-scale scientific research machines and the supercomputers used to model them. An astounding example is illustrated in a graph released by EFDA-JET, at Oxfordshire, UK, presently the largest operating magnetic-confinement fusion experiment. The graph shows plasma confinement times (an essential performance parameter) for all the major tokamaks in the international fusion program, over their existing lifetimes. The remarkable thing about the data is not so much confinement-time versus date or scale, but the fact that the data are given for both the computer model predictions and the actual experimental measurements, and the two are in phenomenal agreement over the extended range of scales. Supercomputer models, sometimes operating with the intricacy of Schroedinger's equation at quantum physical scales, have become a costly but enormously cost-saving tool.
NASA Astrophysics Data System (ADS)
Volkov, A.; Aristova, A.
2017-06-01
Recently, megalopolises have become centres of economic development worldwide. Gradual growth in energy consumption, and the enormous power production and delivery needed to sustain a metropolis, have entailed a rapid increase in emissions of hazardous substances, in quantities no longer tolerable for secure residence in the majority of these cities. Ekaterinburg is one of them. In order to curb harmful pollution in Ekaterinburg and further centralize the economic importance of the city, this paper proposes to implement the concept of urban sustainable development by introducing alternative energy sources, which would progressively displace traditional fossil fuels. A number of actual cases, where the concept was successfully implemented, were studied and analysed to demonstrate how different shares of renewables can become effective substitutes for conventional energy sources in cities strongly dependent on them: 1. Energy strategy of Pecs (Hungary); 2. International low carbon city (ILCC) project (Shenzhen, China); 3. Electric power system template of Tangshan city (China). Further, the regional environmental and economic specifics of Ekaterinburg were studied to understand power consumption needs and energy generation possibilities, which led the authors to conclude on the feasibility of alternative energy sources, plot a specific flow chart for RES implementation in Ekaterinburg's power network and outline recommendations for future work.
Ship Trim Optimization: Assessment of Influence of Trim on Resistance of MOERI Container Ship
Duan, Wenyang
2014-01-01
Environmental issues and rising fuel prices necessitate better energy efficiency in all sectors. The shipping industry is a stakeholder in environmental issues: it is responsible for approximately 3% of global CO2 emissions, 14-15% of global NOx emissions, and 16% of global SOx emissions. Ship trim optimization has gained enormous momentum in recent years as an effective operational measure for better energy efficiency and reduced emissions. Ship trim optimization analysis has traditionally been done through tow-tank testing for a specific hullform. Computational techniques are increasingly popular in ship hydrodynamics applications. The purpose of this study is to present MOERI container ship (KCS) hull trim optimization by employing computational methods. Computed total resistance, trim, and sinkage values for the KCS hull in the even-keel condition are compared with experimental values and found to be in reasonable agreement. The agreement validates that the mesh, boundary conditions, and solution techniques are correct. The same mesh, boundary conditions, and solution techniques are used to obtain resistance values in different trim conditions at Fn = 0.2274. Based on the attained results, the optimum trim is suggested. This research serves as a foundation for employing computational techniques for ship trim optimization. PMID:24578649
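A small sketch of the bookkeeping behind such a trim study (the KCS length between perpendiculars of 230 m is published hull data; the resistance values below are hypothetical, not the paper's results): the Froude number Fn = V / sqrt(g*L) fixes the test speed, and the optimum trim is simply the condition with the lowest computed total resistance.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_from_froude(fn: float, lpp_m: float) -> float:
    """Ship speed (m/s) corresponding to a Froude number and length between perpendiculars."""
    return fn * math.sqrt(G * lpp_m)

# KCS length between perpendiculars is 230 m; Fn = 0.2274 as in the study
v = speed_from_froude(0.2274, 230.0)
print(f"test speed ~ {v:.2f} m/s")

# Hypothetical total resistance (kN) by trim condition; the optimum is the minimum
resistance_by_trim = {"1.0 deg bow down": 1710.0, "even keel": 1725.0, "1.0 deg bow up": 1760.0}
best = min(resistance_by_trim, key=resistance_by_trim.get)
print("optimum trim under these assumed values:", best)
```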
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
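A toy sketch of the core deduplication idea (mine, not the authors' SMD code): hash page contents offline and map identical pages to one shared frame; restricting the scan to read-only code pages is what keeps later copy-on-write overhead low.

```python
import hashlib

PAGE_SIZE = 4096

def dedup_code_pages(pages: dict) -> dict:
    """Offline pass: map each page id to a canonical page id holding identical content."""
    seen = {}          # content digest -> canonical page id
    mapping = {}       # page id -> canonical page id (shared frame)
    for pid, data in pages.items():
        digest = hashlib.sha256(data).hexdigest()
        mapping[pid] = seen.setdefault(digest, pid)
    return mapping

# Two VMs running the same binary share code pages; their data pages differ
pages = {
    ("vm1", 0): b"\x90" * PAGE_SIZE, ("vm2", 0): b"\x90" * PAGE_SIZE,   # identical code pages
    ("vm1", 1): b"a" * PAGE_SIZE,    ("vm2", 1): b"b" * PAGE_SIZE,      # distinct data pages
}
shared = dedup_code_pages(pages)
print(shared[("vm2", 0)])   # -> ('vm1', 0): the duplicate code page maps to one shared frame
```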
NASA Technical Reports Server (NTRS)
Albert, Stephen L.; Spencer, Jeffrey B.
1994-01-01
'THE VERTICAL' computer keyboard is designed to address critical factors which contribute to Repetitive Motion Injuries (RMI) (including Carpal Tunnel Syndrome) in association with computer keyboard usage. This keyboard splits the standard QWERTY design into two halves and positions each half 90 degrees from the desk. In order to access a computer correctly, 'THE VERTICAL' requires users to position their bodies in optimal alignment with the keyboard. The orthopaedically neutral forearm position (with hands palms-in and thumbs-up) reduces nerve compression in the forearm. The vertically arranged keypad halves reduce the onset of keyboard-associated RMI. By utilizing visually-referenced mirrored mylar surfaces adjustable to the user's eye, the user is able to readily reference any key indicia (reversed) just as they would on a conventional keyboard. Transverse adjustability substantially reduces cumulative musculoskeletal discomfort in the shoulders. 'THE VERTICAL' eliminates the need for an external mouse by offering a convenient finger-accessible cursor control while the hands remain in the vertically neutral position. The potential commercial application for 'THE VERTICAL' is enormous since the product can affect every person who uses a computer anywhere in the world. Employers and their insurance carriers are spending hundreds of millions of dollars per year as a result of RMI. This keyboard will reduce the risk.
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Zamanyan, Alen; Torri, Federica; Macciardi, Fabio; Hobel, Sam; Moon, Seok Woo; Sung, Young Hee; Jiang, Zhiguo; Labus, Jennifer; Kurth, Florian; Ashe-McNalley, Cody; Mayer, Emeran; Vespa, Paul M.; Van Horn, John D.; Toga, Arthur W.
2013-01-01
The volume, diversity and velocity of biomedical data are exponentially increasing providing petabytes of new neuroimaging and genetics data every year. At the same time, tens-of-thousands of computational algorithms are developed and reported in the literature along with thousands of software tools and services. Users demand intuitive, quick and platform-agnostic access to data, software tools, and infrastructure from millions of hardware devices. This explosion of information, scientific techniques, computational models, and technological advances leads to enormous challenges in data analysis, evidence-based biomedical inference and reproducibility of findings. The Pipeline workflow environment provides a crowd-based distributed solution for consistent management of these heterogeneous resources. The Pipeline allows multiple (local) clients and (remote) servers to connect, exchange protocols, control the execution, monitor the states of different tools or hardware, and share complete protocols as portable XML workflows. In this paper, we demonstrate several advanced computational neuroimaging and genetics case-studies, and end-to-end pipeline solutions. These are implemented as graphical workflow protocols in the context of analyzing imaging (sMRI, fMRI, DTI), phenotypic (demographic, clinical), and genetic (SNP) data. PMID:23975276
Exploiting Analytics Techniques in CMS Computing Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
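A minimal pure-Python analogue of the kind of MapReduce job described above (record values are hypothetical, and real CMS jobs run on the Hadoop cluster rather than in-process): the map step emits a (dataset, 1) pair per replica record and the reduce step sums them into a replica count per dataset.

```python
from collections import Counter
from itertools import chain

records = [  # hypothetical monitoring records: (dataset, tier)
    ("/ZMM/Run2/AOD", "T1_US_FNAL"), ("/ZMM/Run2/AOD", "T2_CH_CERN"),
    ("/TTJets/Run2/MINIAOD", "T2_DE_DESY"), ("/ZMM/Run2/AOD", "T2_IT_Bari"),
]

def map_phase(record):
    dataset, _tier = record
    yield (dataset, 1)                      # emit one count per replica record

def reduce_phase(pairs):
    counts = Counter()
    for dataset, n in pairs:
        counts[dataset] += n                # sum counts per dataset key
    return counts

replica_counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(dict(replica_counts))   # {'/ZMM/Run2/AOD': 3, '/TTJets/Run2/MINIAOD': 1}
```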
Static Memory Deduplication for Performance Optimization in Cloud Computing
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-01-01
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible. PMID:28448434
The Fundamentals and Status of Nuclear Power
NASA Astrophysics Data System (ADS)
Matzie, Regis A.
2011-11-01
Nuclear power has enormous potential to provide clean, safe base-load electricity to the world's growing population. Harnessing this potential in an economic and responsible manner is not without challenges. Safety remains the principal tenet of our operating fleet, which currently provides ~20% of U.S. electricity generated. The performance of this fleet from economic and safety standpoints has improved dramatically over the past several decades. This nuclear generation also represents greater than 70% of the emission-free electricity, with hydroelectric power providing the majority of the remainder. There have been many lessons learned from the more than 50 years of experience with nuclear power and these have been factored into the new designs now being constructed worldwide. These new designs, which have enhanced safety compared to the operating fleet, have been simplified by employing passive safety systems and modular construction. There are applications for licenses of more than 20 new reactors under review by the U.S. Nuclear Regulatory Commission; the first of these licenses will be completed in early 2012, and the first new U.S. reactor will start operating in 2016. Yet there are still more improvements that can be made and these are being pursued to achieve an even greater deployment of nuclear power technology.
System-wide power management control via clock distribution network
Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.
2015-05-19
An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
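A schematic sketch of the general idea of carrying a command on the clock distribution network (my abstraction, not the patented circuit): each clock cycle keeps its period, but its pulse width is nudged wide or narrow to encode one command bit that every processor can decode locally.

```python
NOMINAL_DUTY = 0.50
WIDE_DUTY = 0.55      # encodes bit 1
NARROW_DUTY = 0.45    # encodes bit 0

def encode_command(bits):
    """Return the per-cycle duty-cycle sequence carrying the command bits."""
    return [WIDE_DUTY if b else NARROW_DUTY for b in bits]

def decode_command(duty_cycles):
    """Each node recovers the command by thresholding measured pulse widths."""
    return [1 if d > NOMINAL_DUTY else 0 for d in duty_cycles]

command = [1, 0, 1, 1]            # e.g. a "reduce power dissipation" opcode, purely illustrative
print(decode_command(encode_command(command)))   # [1, 0, 1, 1]
```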
Reducing power consumption while performing collective operations on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2011-10-18
Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
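A minimal sketch of the selection step described in this record (the power profiles are invented): for a requested collective type, each node picks, from the available algorithmic variants, the one whose power consumption characteristics best satisfy the policy, here simply the lowest draw.

```python
# Hypothetical power characteristics (average watts per node) for variants of each collective type
POWER_PROFILE = {
    "allreduce": {"recursive_doubling": 42.0, "ring": 35.0, "binary_tree": 38.0},
    "broadcast": {"binomial_tree": 30.0, "scatter_allgather": 33.0},
}

def select_collective(collective_type: str) -> str:
    """Pick the variant with the lowest power draw for the requested collective type."""
    variants = POWER_PROFILE[collective_type]
    return min(variants, key=variants.get)

print(select_collective("allreduce"))   # 'ring' under these assumed profiles
```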
Enhancements in medicine by integrating content based image retrieval in computer-aided diagnosis
NASA Astrophysics Data System (ADS)
Aggarwal, Preeti; Sardana, H. K.
2010-02-01
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. Image retrieval is a useful tool to help radiologists check medical images and diagnoses. The impact of content-based access to medical images is frequently reported, but existing systems are designed for only a particular context of diagnosis. The challenge in medical informatics is to develop tools for analyzing the content of medical images and to represent them in a way that can be efficiently searched and compared by physicians. CAD is a concept established by taking into account equally the roles of physicians and computers. To build a successful computer-aided diagnostic system, all the relevant technologies, especially retrieval, need to be integrated in such a manner that they provide effective and efficient pre-diagnosed cases with proven pathology for the current case at the right time. In this paper, it is suggested that integration of content-based image retrieval (CBIR) in CAD can bring enormous benefits in medicine, especially in diagnosis. This approach is also compared with other approaches by highlighting its advantages over them.
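A minimal sketch of the retrieval step CBIR contributes to CAD (feature vectors are randomly generated stand-ins, not real image descriptors): each image is reduced to a feature vector, and previously diagnosed cases are ranked by cosine similarity to the current case.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_similar(query_features, case_library, top_k=3):
    """Rank archived, already-diagnosed cases by similarity to the query image features."""
    scored = [(cid, cosine_similarity(query_features, feats)) for cid, feats in case_library.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

rng = np.random.default_rng(1)
library = {f"case_{i:03d}": rng.normal(size=64) for i in range(100)}   # hypothetical feature vectors
query = rng.normal(size=64)
for case_id, score in retrieve_similar(query, library):
    print(case_id, round(score, 3))
```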
Semantic Interoperability for Computational Mineralogy: Experiences of the eMinerals Consortium
NASA Astrophysics Data System (ADS)
Walker, A. M.; White, T. O.; Dove, M. T.; Bruin, R. P.; Couch, P. A.; Tyer, R. P.
2006-12-01
The use of atomic scale computer simulation of minerals to obtain information for geophysics and environmental science has grown enormously over the past couple of decades. It is now routine to probe mineral behavior in the Earth's deep interior and in the surface environment by borrowing methods and simulation codes from computational chemistry and physics. It is becoming increasingly important to use methods embodied in more than one of these codes to solve any single scientific problem. However, scientific codes are rarely designed for easy interoperability and data exchange; data formats are often code-specific, poorly documented and fragile, liable to frequent change between software versions, and even compiler versions. This means that the scientist's simple desire to use the methodological approaches offered by multiple codes is frustrated, and even the sharing of data between collaborators becomes fraught with difficulties. The eMinerals consortium was formed in the early stages of the UK eScience program with the aim of developing the tools needed to apply atomic scale simulation to environmental problems in a grid-enabled world, and to harness the computational power offered by grid technologies to address some outstanding mineralogical problems. One example of the kind of problem we can tackle is the origin of the compressibility anomaly in silica glass. By passing data directly between simulation and analysis tools we were able to probe this effect in more detail than has previously been possible and have shown how the anomaly is related to the details of the amorphous structure. In order to approach this kind of problem we have constructed a mini-grid, a small scale and extensible combined compute- and data-grid that allows the execution of many calculations in parallel, and the transparent storage of semantically-rich marked-up result data. Importantly, we automatically capture multiple kinds of metadata and key results from each calculation. We believe that the lessons learned and tools developed will be useful in many areas of science beyond computational mineralogy. Key tools that will be described include: a pure Fortran XML library (FoX) that presents XPath, SAX and DOM interfaces as well as permitting the easy production of valid XML from legacy Fortran programs; a job submission framework that automatically schedules calculations to remote grid resources, handles data staging and metadata capture; and a tool (AgentX) that maps concepts from an ontology onto locations in documents of various formats that we use to enable data exchange.
NASA Astrophysics Data System (ADS)
Almasoudi, Fahad M.; Alatawi, Khaled S.; Matin, Mohammad
2016-09-01
Wide band gap (WBG) power devices have attracted many commercial companies and are becoming available in the market because of their enormous advantages over traditional Si power devices. An example of a WBG material is SiC, which offers a number of advantages over Si. For example, SiC has the ability to block higher voltages, reduce switching and conduction losses, and support higher switching frequencies. Consequently, SiC power devices have become an affordable choice for high-frequency, high-power applications. The goal of this paper is to study the performance of a 4.5 kW, 200 kHz, 600 V DC-DC boost converter operating in continuous conduction mode (CCM) for PV applications. The switching behavior and turn-on and turn-off losses of different switching power devices such as the SiC MOSFET, the SiC normally-on JFET, and the Si MOSFET are investigated and analyzed. Moreover, a detailed comparison is provided to show the overall efficiency of the DC-DC boost converter with the different switching power devices. It is found that the efficiency of SiC power switching devices is higher than that of Si-based switching devices due to low switching and conduction losses when operating at high frequencies. According to the results, the performance of SiC switching power devices dominates that of conventional Si power devices in terms of low losses, high efficiency and high power density. Accordingly, SiC power switching devices are more appropriate for PV applications, where a smaller, highly efficient, and cost-effective converter is required.
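A worked sketch of two relations commonly used in this kind of comparison (device parameters are hypothetical, not the paper's measured SiC/Si values): the ideal CCM boost duty cycle D = 1 - Vin/Vout, and a first-order hard-switching loss estimate P_sw ~ 0.5*V*I*(t_rise + t_fall)*f_sw, which shows why the faster edges of SiC devices matter at 200 kHz.

```python
def boost_duty_cycle(v_in: float, v_out: float) -> float:
    """Ideal duty cycle of a boost converter in continuous conduction mode."""
    return 1.0 - v_in / v_out

def switching_loss_w(v_ds: float, i_d: float, t_rise_s: float, t_fall_s: float, f_sw_hz: float) -> float:
    """First-order estimate of hard-switching loss in the power device."""
    return 0.5 * v_ds * i_d * (t_rise_s + t_fall_s) * f_sw_hz

print(f"duty cycle: {boost_duty_cycle(300.0, 600.0):.2f}")   # 0.50 for a hypothetical 300 V -> 600 V boost
# Hypothetical edge times: a slower Si device vs. a faster SiC device at 200 kHz, 600 V, 7.5 A
print(f"Si  switching loss ~ {switching_loss_w(600.0, 7.5, 60e-9, 80e-9, 200e3):.1f} W")
print(f"SiC switching loss ~ {switching_loss_w(600.0, 7.5, 20e-9, 25e-9, 200e3):.1f} W")
```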
NASA Astrophysics Data System (ADS)
Zubrin, Robert M.
1994-07-01
In the past, most studies dealing with the benefits of space nuclear electric power systems for solar system exploration have focused on the potential of nuclear electric propulsion (NEP) to enhance missions by increasing delivered payload, decreasing LEO mass, or reducing trip time. While important, such mission enhancements have failed to go to the heart of the concerns of the scientific community supporting interplanetary exploration. To put the matter succinctly, scientists don't buy delivered payload - they buy data returned. With nuclear power we can increase both the quantity of data returned, by enormously increasing data communication rates, and the quality of data by enabling a host of active sensing techniques otherwise impossible. These non-propulsive mission enhancement capabilities of space nuclear power have been known in principle for many years, but they have not been adequately documented. As a result, support for the development of space nuclear power by the interplanetary exploration community has been much less forceful than it might otherwise be. In this paper we shall present mission designs that take full advantage of the potential mission enhancements offered by space nuclear power systems in the 10 to 100 kWe range, not just for propulsion, but to radically improve, enrich, and expand the science return itself. Missions considered include orbiter missions to each of the outer planets. It will be shown that, by using hybrid trajectories combining chemical propulsion with NEP and (in certain cases) gravity assists, it is possible, using a Titan IV-Centaur launch vehicle, for high-powered spacecraft to be placed in orbit around each of the outer planets with electric propulsion burn times of less than 4 years. Such hybrid trajectories therefore make the outer solar system available to near-term nuclear electric power systems. Once in orbit, the spacecraft will utilize multi-kilowatt communication systems, similar to those now employed by the U.S. military, to increase data return far beyond that possible utilizing the 40 W rf traveling wave tube antennas that are the current NASA standard. This higher data rate will make possible very high resolution multi-spectral imaging (with high resolutions both spatially and spectrally), a form of science hitherto impossible in the outer solar system. Large numbers of such images could be returned, allowing the creation of motion pictures of atmospheric phenomena on a small scale and greatly increasing the probability of capturing transient phenomena such as lightning or volcanic activity. The multi-kilowatt power sources on the spacecraft also enable active sensing, including radar, which could be used to do topographic and subsurface studies of clouded bodies such as Titan, ground-penetrating sounding of Pluto, the major planets' moons, and planetoids, and topside sounding of the electrically conductive atmospheres of Jupiter, Saturn, Uranus and Neptune to produce profiles of fluid density, conductivity, and horizontal and vertical velocity as a function of depth and global location. Radio science investigations of planetary atmospheres and ring systems would be greatly enhanced by increased transmitter power. The scientific benefits of utilizing such techniques are discussed, and a comparison is made with the quantity and quality of science that a low-powered spacecraft employing RTGs could return.
It is concluded that the non-propulsive benefits of nuclear power for spacecraft exploring the outer solar system are enormous, and taken together with the well documented mission enhancements enabled by electric propulsion fully justify the expenditures needed to bring a space qualified nuclear electric power source into being.
Maximizing the science return of interplanetary missions using nuclear electric power
NASA Astrophysics Data System (ADS)
Zubrin, Robert M.
1995-01-01
In the past, most studies dealing with the benefits of space nuclear electric power systems for solar system exploration have focused on the potential of nuclear electric propulsion (NEP) to enhance missions by increasing delivered payload, decreasing LEO mass, or reducing trip time. While important, such mission enhancements have failed to go to the heart of the concerns of the scientific community supporting interplanetary exploration. To put the matter succinctly, scientists don't buy delivered payload - they buy data returned. With nuclear power we can increase both the quantity of data returned, by enormously increasing data communication rates, and the quality of data by enabling a host of active sensing techniques otherwise impossible. These non-propulsive mission enhancement capabilities of space nuclear power have been known in principle for many years, but they have not been adequately documented. As a result, support for the development of space nuclear power by the interplanetary exploration community has been much less forceful than it might otherwise be. In this paper we shall present mission designs that take full advantage of the potential mission enhancements offered by space nuclear power systems in the 15 to 30 kWe range, not just for propulsion, but to radically improve, enrich, and expand the science return itself. Missions considered include orbiter missions to each of the outer planets. It will be shown that, by using hybrid trajectories combining chemical propulsion with NEP and (in certain cases) gravity assists, it is possible, using Proton, Titan III, or Titan IV-Centaur launch vehicles, for high-powered spacecraft to be placed in orbit around each of the outer planets with electric propulsion burn times of less than 4 years. Such hybrid trajectories therefore make the outer solar system available to near-term nuclear electric power systems. Once in orbit, the spacecraft will utilize multi-kilowatt communication systems, similar to those now employed by the U.S. military, to increase data return far beyond that possible utilizing the 40 W rf traveling wave tube antennas that are the current NASA standard. This higher data rate will make possible very high resolution multi-spectral imaging (with high resolutions both spatially and spectrally), a form of science hitherto impossible in the outer solar system. Large numbers of such images could be returned, allowing the creation of motion pictures of atmospheric phenomena on a small scale and greatly increasing the probability of capturing transient phenomena such as lightning or volcanic activity. The multi-kilowatt power sources on the spacecraft also enable active sensing, including radar, which could be used to do topographic and subsurface studies of clouded bodies such as Titan, ground-penetrating sounding of Pluto, the major planets' moons, and planetoids, and topside sounding of the electrically conductive atmospheres of Jupiter, Saturn, Uranus and Neptune to produce profiles of fluid density, conductivity, and horizontal and vertical velocity as a function of depth and global location. Radio science investigations of planetary atmospheres and ring systems would be greatly enhanced by increased transmitter power. The scientific benefits of utilizing such techniques are discussed, and a comparison is made with the quantity and quality of science that a low-powered spacecraft employing RTGs could return.
It is concluded that the non-propulsive benefits of nuclear power for spacecraft exploring the outer solar system are enormous, and, taken together with the well-documented mission enhancements enabled by electric propulsion, fully justify the expenditures needed to bring a space-qualified nuclear electric power source into being.
Proven and novel strategies for efficient editing of the human genome.
Mussolino, Claudio; Mlambo, Tafadzwa; Cathomen, Toni
2015-10-01
Targeted gene editing with designer nucleases has become increasingly popular. The most commonly used designer nuclease platforms are engineered meganucleases, zinc-finger nucleases, transcription activator-like effector nucleases and the clustered regularly interspaced short palindromic repeat/Cas9 system. These powerful tools have greatly facilitated the generation of plant and animal models for basic research, and harbor an enormous potential for applications in biotechnology and gene therapy. This review recapitulates proven concepts of targeted genome engineering in primary human cells and elaborates on novel concepts that became possible with the dawn of RNA-guided nucleases and RNA-guided transcription factors. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.
2002-01-01
Mars is one of the most fascinating planets in the solar system, featuring an atmosphere, water, and enormous volcanoes and canyons. The Mars Pathfinder, Global Surveyor, and Odyssey missions mark the first wave of Planet Earth's coming invasion of the red planet, changing our views of the past and future of the planet and the possibilities of life. Scientist and science-fiction writer Geoffrey A. Landis will present experiences from the Pathfinder mission, discuss the challenges of using solar power on the surface of Mars, and preview future missions such as the upcoming Mars Twin Rovers, which will launch two highly capable vehicles in 2003 to explore the Martian surface.
An update on Lab Rover: A hospital material transporter
NASA Technical Reports Server (NTRS)
Mattaboni, Paul
1994-01-01
The development of a hospital material transporter, 'Lab Rover', is described. Conventional material transport now utilizes people power, push carts, pneumatic tubes and tracked vehicles. Hospitals are faced with enormous pressure to reduce operating costs. Cyberotics, Inc. developed an Autonomous Intelligent Vehicle (AIV). This battery operated service robot was designed specifically for health care institutions. Applications for the AIV include distribution of clinical lab samples, pharmacy drugs, administrative records, x-ray distribution, meal tray delivery, and certain emergency room applications. The first AIV was installed at Lahey Clinic in Burlington, Mass. Lab Rover was beta tested for one year and has been 'on line' for an additional 2 years.
Gur, Ilan
2018-01-16
An overview presentation about ARPA-E's AMPED program. AMPED projects seek to develop advanced sensing, control, and power management technologies that redefine the way we think about battery management. Energy storage can significantly improve U.S. energy independence, efficiency, and security by enabling a new generation of electric vehicles. While rapid progress is being made in new battery materials and storage technologies, few innovations have emerged in the management of advanced battery systems. AMPED aims to unlock enormous untapped potential in the performance, safety, and lifetime of today's commercial battery systems exclusively through system-level innovations, and is thus distinct from existing efforts to enhance underlying battery materials and architectures.
Recent updates of marine antimicrobial peptides.
Semreen, Mohammad H; El-Gamal, Mohammed I; Abdin, Shifaa; Alkhazraji, Hajar; Kamal, Leena; Hammad, Saba; El-Awady, Faten; Waleed, Dima; Kourbaj, Layal
2018-03-01
Antimicrobial peptides are a group of proteins showing broad-spectrum antimicrobial activity that have been known to be powerful agents against a variety of pathogens. This class of compounds contributed to solving the microbial resistance dilemma that limited the use of many potent antimicrobial agents. The marine environment is known to be one of the richest sources of antimicrobial peptides, yet this environment is not fully explored. Hence, scientific research attention should be directed toward the marine ecosystem, as an enormous number of useful discoveries could be brought to the forefront. In the current article, the marine antimicrobial peptides reported from mid-2012 to 2017 have been reviewed.
Hinode Takes an X-Ray of a Powerful Solar Flare
2017-09-10
On Sept. 10, 2017, the Hinode satellite observed an enormous X-class flare burst from an active region on the western edge of the Sun. The video shows the high-energy flare as seen by Hinode's X-Ray Telescope. The emission was so bright that the initial blast caused the detector to saturate. The giant explosion sent a huge cloud of superhot plasma zooming into interplanetary space -- a phenomenon known as a coronal mass ejection. Studying large flares like this one with a variety of instruments is key to understanding exactly what causes these dramatic eruptions, and one day predicting them before they occur.
Marshal Wrubel and the Electronic Computer as an Astronomical Instrument
NASA Astrophysics Data System (ADS)
Mutschlecner, J. P.; Olsen, K. H.
1998-05-01
In 1960, Marshal H. Wrubel, professor of astrophysics at Indiana University, published an influential review paper under the title, "The Electronic Computer as an Astronomical Instrument." This essay pointed out the enormous potential of the electronic computer as an instrument of observational and theoretical research in astronomy, illustrated programming concepts, and made specific recommendations for the increased use of computers in astronomy. He noted that, with a few scattered exceptions, computer use by the astronomical community had heretofore been "timid and sporadic." This situation was to improve dramatically in the next few years. By the late 1950s, general-purpose, high-speed, "mainframe" computers were just emerging from the experimental, developmental stage, but few were affordable by or available to academic and research institutions not closely associated with large industrial or national defense programs. Yet by 1960 Wrubel had spent a decade actively pioneering and promoting the imaginative application of electronic computation within the astronomical community. Astronomy upper-level undergraduate and graduate students at Indiana were introduced to computing, and Ph.D. candidates who he supervised applied computer techniques to problems in theoretical astrophysics. He wrote an early textbook on programming, taught programming classes, and helped establish and direct the Research Computing Center at Indiana, later named the Wrubel Computing Center in his honor. He and his students created a variety of algorithms and subroutines and exchanged these throughout the astronomical community by distributing the Astronomical Computation News Letter. Nationally as well as internationally, Wrubel actively cooperated with other groups interested in computing applications for theoretical astrophysics, often through his position as secretary of the IAU commission on Stellar Constitution.
Ectoplasm, ghost in the R cell machine?
Xia, Hongai; Ready, Donald F
2011-12-01
Drosophila photoreceptors (R cells) are an extreme instance of sensory membrane amplification via apical microvilli, a widely deployed and deeply conserved operation of polarized epithelial cells. Developmental rotation of R cell apices aligns rhabdomere microvilli across the optical axis and enables enormous membrane expansion in a new, proximal-distal dimension. R cell ectoplasm, the specialized cortical cytoplasm abutting the rhabdomere, is likewise enormously amplified. Ectoplasm is dominated by the actin-rich terminal web, a conserved operational domain of the ancient vesicle-transport motor, Myosin V. R cells harness Myosin V, using two distinct Rab proteins, to move two distinct cargoes: the biosynthetic traffic that builds the rhabdomere during development, and the pigment granules whose migration mediates the adaptive "longitudinal pupil" in adults. Ectoplasm further shapes a distinct cortical endosome compartment, the subrhabdomeral cisterna (SRC), vital to normal cell function. Reticulon, a protein that promotes endomembrane curvature, marks the SRC. R cell visual arrestin 2 (Arr2) is predominantly cytoplasmic in dark-adapted photoreceptors, but on illumination it translocates to the rhabdomere, where it quenches ongoing photosignaling by binding to activated metarhodopsin. Arr2 translocation is "powered" by diffusion; a motor is not required to move Arr2, and ectoplasm does not obstruct its rapid diffusion to the rhabdomere. Copyright © 2011 Wiley Periodicals, Inc.
Simulation Data as Data Streams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdulla, G; Arrighi, W; Critchlow, T
2003-11-18
Computational or scientific simulations are increasingly being applied to solve a variety of scientific problems. Domains such as astrophysics, engineering, chemistry, biology, and environmental studies are benefiting from this important capability. Simulations, however, produce enormous amounts of data that need to be analyzed and understood. In this overview paper, we describe scientific simulation data, its characteristics, and the way scientists generate and use the data. We then compare and contrast simulation data to data streams. Finally, we describe our approach to analyzing simulation data, present the AQSim (Ad-hoc Queries for Simulation data) system, and discuss some of the challenges that result from handling this kind of data.
Experiences with hypercube operating system instrumentation
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Rudolph, David C.
1989-01-01
The difficulty of conceptualizing the interactions among a large number of processors makes it hard both to identify the sources of inefficiency and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed necessary, for large-scale parallel computers; the enormous volume of performance data mandates visual display.
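As an illustration of the kind of post-processing such event traces support, the following sketch (hypothetical event names and records, not the instrumentation system's actual trace format) reduces raw events to per-processor summary statistics:

```python
# Reduce a raw event trace to per-processor summary statistics.
# The trace format (timestamp_us, processor_id, event) is hypothetical.
from collections import defaultdict

trace = [
    (0,   0, "compute_start"), (120, 0, "compute_end"),
    (5,   1, "compute_start"), (160, 1, "compute_end"),
    (120, 0, "send"),          (165, 1, "recv"),
]

busy = defaultdict(float)    # accumulated compute time per processor
counts = defaultdict(int)    # event counts per (processor, event)
start = {}
for t, p, ev in trace:
    counts[p, ev] += 1
    if ev == "compute_start":
        start[p] = t
    elif ev == "compute_end":
        busy[p] += t - start.pop(p)

for p in sorted(busy):
    print(f"processor {p}: busy {busy[p]:.0f} us, "
          f"{counts[p, 'send']} sends, {counts[p, 'recv']} recvs")
```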
The large-scale structure of the Universe.
Springel, Volker; Frenk, Carlos S; White, Simon D M
2006-04-27
Research over the past 25 years has led to the view that the rich tapestry of present-day cosmic structure arose during the first instants of creation, when weak ripples were imposed on the otherwise uniform and rapidly expanding primordial soup. Over 14 billion years of evolution, these ripples have been amplified to enormous proportions by gravitational forces, producing ever-growing concentrations of dark matter in which ordinary gases cool, condense and fragment to make galaxies. This process can be faithfully mimicked in large computer simulations, and tested by observations that probe the history of the Universe starting from just 400,000 years after the Big Bang.
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Obousy, R. K.
2012-09-01
Sending a mission to distant stars will require our civilization to develop new technologies and change the way we live. The complexity of the task is enormous [1]; thus, the thought is to involve people from around the globe through the ``citizen scientist'' paradigm. The suggestion is a ``Gaming Virtual Reality Network'' (GVRN) to simulate sociological and technological aspects involved in this project. Work is currently being done [2] on developing a technology which will construct computer games within GVRN. This technology will provide quick and easy ways for individuals to develop game scenarios related to various aspects of the ``100YSS'' project. People will be involved in solving certain tasks just by playing games. Players will be able to modify conditions, add new technologies, geological conditions, and social movements, and assemble new strategies just by writing scenarios. The system will interface with textual and video information, extract scenarios written in millions of texts, and use them to assemble new games. Thus, players will be able to simulate enormous numbers of possibilities. Information technologies will be involved, which will require us to build the system in a way that any module can be easily replaced. Thus, GVRN should be modular and open to the community.
NASA Astrophysics Data System (ADS)
Ab Razak, Mohd Zulhakimi; Saleh, Zatul Saliza; Ahmad, Fauzan; Anyi, Carol Livan; Harun, Sulaiman W.; Arof, Hamzah
2016-10-01
Due to the enormous potential of pulsed lasers in applications such as manufacturing, metrology, environmental sensing, and biomedical diagnostics, a high-power and stable Q-switched erbium-ytterbium codoped double-clad fiber laser (EYDFL) incorporating a multiwall carbon nanotubes (MWCNTs) saturable absorber (SA) prepared from polyvinyl alcohol (PVA) at a 3∶2 ratio is demonstrated. The SA was fabricated by mixing a dilute PVA solution with an MWCNTs homogeneous solution. Subsequently, the mixture was sonicated and centrifuged to produce a homogeneous suspension that was left to dry at room temperature to form the MWCNTs-PVA film. The SA was formed by inserting the film between a pair of FC/PC fiber connectors. Then, it was integrated into the EYDFL's ring cavity, which uses a 5-m-long erbium-ytterbium codoped fiber (EYDF). The lasing threshold for the Q-switched EYDFL was 330 mW. At the maximum available pump power of 900 mW, the proposed EYDFL produced Q-switched pulses with a repetition rate of 74.85 kHz, a pulse width of ˜3.6 μs, and an average output power of about 5 mW. The maximum energy per pulse of ˜85 nJ was obtained at a pump power of ˜700 mW, with a peak power of 21 mW.
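The quoted figures are consistent with the standard Q-switching relations, pulse energy = average power / repetition rate and peak power ≈ pulse energy / pulse width; a rough consistency check using only numbers from the abstract (noting that the 85 nJ maximum is quoted at a different pump power, where the repetition rate is lower) might look like this:

```python
# Back-of-envelope check of the Q-switched pulse parameters at 900 mW pump.
P_avg = 5e-3        # W, average output power at 900 mW pump (from abstract)
f_rep = 74.85e3     # Hz, repetition rate at 900 mW pump
tau = 3.6e-6        # s, pulse width at 900 mW pump

E_pulse = P_avg / f_rep      # ~6.7e-8 J, i.e. ~67 nJ at this operating point
P_peak = E_pulse / tau       # ~19 mW; the abstract's 21 mW peak is quoted at 700 mW pump

print(f"pulse energy ~ {E_pulse*1e9:.0f} nJ, peak power ~ {P_peak*1e3:.0f} mW")
```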
Enormous disc of cool gas surrounding the nearby powerful radio galaxy NGC612 (PKS0131-36)
NASA Astrophysics Data System (ADS)
Emonts, B. H. C.; Morganti, R.; Oosterloo, T. A.; Holt, J.; Tadhunter, C. N.; van der Hulst, J. M.; Ojha, R.; Sadler, E. M.
2008-06-01
We present the detection of an enormous disc of cool neutral hydrogen (HI) gas surrounding the S0 galaxy NGC612, which hosts one of the nearest powerful radio sources (PKS0131-36). Using the Australia Telescope Compact Array, we detect M_HI = 1.8 × 10^9 M_solar of HI emission-line gas that is distributed in a 140-kpc-wide disc-like structure along the optical disc and dust lane of NGC612. The bulk of the gas in the disc appears to be settled in regular rotation with a total velocity range of 850 km s^-1, although asymmetries in this disc indicate that perturbations are being exerted on part of the gas, possibly by a number of nearby companions. The HI disc in NGC612 suggests that the total mass enclosed by the system is M_enc ~ 2.9 × 10^12 sin^-2(i) M_solar, implying that this early-type galaxy contains a massive dark matter halo. We also discuss an earlier study by Holt et al. that revealed the presence of a prominent young stellar population at various locations throughout the disc of NGC612, indicating that this is a rare example of an extended radio source that is hosted by a galaxy with a large-scale star-forming disc. In addition, we map a faint HI bridge along a distance of 400 kpc between NGC612 and the gas-rich (M_HI = 8.9 × 10^9 M_solar) barred galaxy NGC619, indicating that an interaction between the two systems likely occurred. From the unusual amounts of HI gas and young stars in this early-type galaxy, in combination with the detection of a faint optical shell and the system's high infrared luminosity, we argue that either ongoing or past galaxy interactions or a major merger event is a likely mechanism for the triggering of the radio source in NGC612. This paper is part of an ongoing study to map the large-scale neutral hydrogen properties of nearby radio galaxies, and it presents the first example of large-scale HI detected around a powerful Fanaroff-Riley type II (FR-II) radio galaxy. The HI properties of the FR-II radio galaxy NGC612 are very similar to those found for low-power compact radio sources, but different from those of extended Fanaroff-Riley type I (FR-I) sources.
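The quoted enclosed mass follows from the usual rotating-disc estimate M_enc ≈ v_rot^2 R / G with a sin^-2(i) inclination correction; a back-of-envelope check, assuming the 850 km s^-1 velocity range corresponds to twice the projected rotation velocity and the 140 kpc width to a ~70 kpc outer radius, is sketched below:

```python
# Rough dynamical-mass check from the numbers quoted in the abstract.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
kpc = 3.086e19         # m

v_obs = 0.5 * 850e3    # m/s, projected rotation velocity v_rot * sin(i)
R = 70 * kpc           # m, assumed outer radius of the HI disc

M_enc = v_obs**2 * R / G          # kg; multiply by sin^-2(i) for the true mass
print(f"M_enc ~ {M_enc / M_sun:.1e} sin^-2(i) M_sun")   # ~2.9e12, matching the abstract
```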
Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil
2013-08-15
Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a severalfold speedup. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.
Atuonwu, J C; Tassou, S A
2018-01-23
The enormous magnitude and variety of microwave applications in household, commercial and industrial food processing create a strong motivation for improving the energy efficiency and, hence, the sustainability of the process. This review critically assesses key energy issues associated with microwave food processing, focusing on previous energy performance studies, energy performance metrics, standards and regulations. Factors affecting energy efficiency are categorised into source, load and source-load matching factors. This highlights the need for highly flexible and controllable power sources capable of receiving real-time feedback on load properties and effecting rapid control actions to minimise reflections, heating non-uniformities and other imperfections that lead to energy losses. A case is made for the use of solid-state amplifiers as alternatives to the conventional power source, the magnetron. A full-scale techno-economic analysis, including energy aspects, shows that the use of solid-state amplifiers as replacements for magnetrons is promising, not only from an energy and overall technical perspective, but also in terms of economics.
Tools and procedures for visualization of proteins and other biomolecules.
Pan, Lurong; Aller, Stephen G
2015-04-01
Proteins, peptides, and nucleic acids are biomolecules that drive biological processes in living organisms. An enormous amount of structural data for a large number of these biomolecules has been described with atomic precision in the form of structural "snapshots" that are freely available in public repositories. These snapshots can help explain how the biomolecules function, the nature of interactions between multi-molecular complexes, and even how small-molecule drugs can modulate the biomolecules for clinical benefits. Furthermore, these structural snapshots serve as inputs for sophisticated computer simulations to turn the biomolecules into moving, "breathing" molecular machines for understanding their dynamic properties in real-time computer simulations. In order for the researcher to take advantage of such a wealth of structural data, it is necessary to gain competency in the use of computer molecular visualization tools for exploring the structures and visualizing three-dimensional spatial representations. Here, we present protocols for using two common visualization tools--the Web-based Jmol and the stand-alone PyMOL package--as well as a few examples of other popular tools. Copyright © 2015 John Wiley & Sons, Inc.
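To give a flavor of the PyMOL side of such protocols, a minimal scripting sketch is shown below (run inside PyMOL or with its Python module installed; the PDB code 1UBQ is an arbitrary example structure, not one discussed in the protocols):

```python
# Minimal PyMOL scripting sketch: fetch a structure, render a cartoon view,
# and save an image. All choices (structure, colors, file name) are illustrative.
from pymol import cmd

cmd.fetch("1ubq")                 # download an example structure from the PDB
cmd.hide("everything")
cmd.show("cartoon")               # ribbon/cartoon representation
cmd.color("marine", "ss H")       # color helices
cmd.color("yellow", "ss S")       # color beta strands
cmd.show("sticks", "organic")     # show any bound small molecules as sticks
cmd.png("1ubq_cartoon.png", width=1200, height=900, dpi=150, ray=1)
```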
If we designed airplanes like we design drugs…
NASA Astrophysics Data System (ADS)
Woltosz, Walter S.
2012-01-01
In the early days, airplanes were put together with parts designed for other purposes (bicycles, farm equipment, textiles, automotive equipment, etc.). They were then flown by their brave designers to see if the design would work—often with disastrous results. Today, airplanes, helicopters, missiles, and rockets are designed in computers in a process that involves iterating through enormous numbers of designs before anything is made. Until very recently, novel drug-like molecules were nearly always made first like early airplanes, then tested to see if they were any good (although usually not on the brave scientists who created them!). The resulting extremely high failure rate is legendary. This article describes some of the evolution of computer-based design in the aerospace industry and compares it with the progress made to date in computer-aided drug design. Software development for pharmaceutical research has been largely entrepreneurial, with only relatively limited support from government and industry end-user organizations. The pharmaceutical industry is still about 30 years behind aerospace and other industries in fully recognizing the value of simulation and modeling and funding the development of the tools needed to catch up.
An evaluation of exact methods for the multiple subset maximum cardinality selection problem.
Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas
2016-05-01
The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.
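A minimal sketch of one such binary integer linear program (a single maximum-cardinality subset under a pairwise dissimilarity threshold, not the authors' full sequential procedure or their data) is given below; it assumes the PuLP package with its bundled CBC solver is available, and the dissimilarity matrix and threshold are hypothetical:

```python
# One binary ILP: pick as many items as possible such that no selected pair
# exceeds a dissimilarity threshold (a maximum-independent-set style model).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

D = [  # hypothetical symmetric dissimilarity matrix for 5 items
    [0.0, 0.2, 0.9, 0.4, 0.8],
    [0.2, 0.0, 0.3, 0.7, 0.9],
    [0.9, 0.3, 0.0, 0.2, 0.6],
    [0.4, 0.7, 0.2, 0.0, 0.3],
    [0.8, 0.9, 0.6, 0.3, 0.0],
]
threshold = 0.5
n = len(D)

prob = LpProblem("max_cardinality_subset", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(n)]
prob += lpSum(x)                          # objective: maximize subset size
for i in range(n):
    for j in range(i + 1, n):
        if D[i][j] > threshold:           # incompatible pair: at most one selected
            prob += x[i] + x[j] <= 1
prob.solve()
print("selected items:", [i for i in range(n) if x[i].value() == 1])
```

The authors' method then extracts further subsets by re-solving on the remaining items, which is the sequential aspect the abstract describes.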
If we designed airplanes like we design drugs....
Woltosz, Walter S
2012-01-01
In the early days, airplanes were put together with parts designed for other purposes (bicycles, farm equipment, textiles, automotive equipment, etc.). They were then flown by their brave designers to see if the design would work--often with disastrous results. Today, airplanes, helicopters, missiles, and rockets are designed in computers in a process that involves iterating through enormous numbers of designs before anything is made. Until very recently, novel drug-like molecules were nearly always made first like early airplanes, then tested to see if they were any good (although usually not on the brave scientists who created them!). The resulting extremely high failure rate is legendary. This article describes some of the evolution of computer-based design in the aerospace industry and compares it with the progress made to date in computer-aided drug design. Software development for pharmaceutical research has been largely entrepreneurial, with only relatively limited support from government and industry end-user organizations. The pharmaceutical industry is still about 30 years behind aerospace and other industries in fully recognizing the value of simulation and modeling and funding the development of the tools needed to catch up.
NASA Technical Reports Server (NTRS)
Wang, Qun-Zhen; Cash, Steve (Technical Monitor)
2002-01-01
It is very important to accurately predict the gas pressure, gas and solid temperature, as well as the amount of O-ring erosion inside the space shuttle Reusable Solid Rocket Motor (RSRM) joints in the event of a leak path. The scenarios considered are typically hot combustion gas rapid pressurization events of small volumes through narrow and restricted flow paths. The ideal method for this prediction is a transient three-dimensional computational fluid dynamics (CFD) simulation with a computational domain including both combustion gas and surrounding solid regions. However, this has not yet been demonstrated to be economical for this application due to the enormous amount of CPU time and memory resulting from the relatively long fill time as well as the large pressure and temperature rise rates. Consequently, all CFD applications in RSRM joints so far are steady-state simulations with solid regions being excluded from the computational domain by assuming either a constant wall temperature or no heat transfer between the hot combustion gas and cool solid walls.
Integrative computational approach for genome-based study of microbial lipid-degrading enzymes.
Vorapreeda, Tayvich; Thammarongtham, Chinae; Laoteng, Kobkul
2016-07-01
Lipid-degrading or lipolytic enzymes have gained enormous attention in academic and industrial sectors. Several efforts are underway to discover new lipase enzymes with particular catalytic properties from a variety of microorganisms for use in a wide range of applications. In addition, various tools and strategies have been implemented to unravel the functional relevance of the versatile lipid-degrading enzymes for special purposes. This review highlights the study of microbial lipid-degrading enzymes through an integrative computational approach. The identification of putative lipase genes from microbial genomes and metagenomic libraries using homology-based mining is discussed, with an emphasis on sequence analysis of conserved motifs and enzyme topology. Molecular modelling of three-dimensional structure on the basis of sequence similarity is shown to be a potential approach for exploring the structural and functional relationships of candidate lipase enzymes. Perspectives on a discriminative framework of cutting-edge tools and technologies, including bioinformatics, computational biology, functional genomics and functional proteomics, intended to facilitate rapid progress in understanding lipolysis mechanisms and to discover novel lipid-degrading enzymes of microorganisms, are also discussed.
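As a toy illustration of the motif-level screening mentioned above, the sketch below scans protein sequences for the canonical lipase/esterase G-X-S-X-G pentapeptide around the catalytic serine; the sequences are invented fragments, and real pipelines would combine such scans with profile/HMM searches and structural modelling:

```python
# Scan candidate protein sequences for the G-X-S-X-G lipase/esterase motif.
import re

GXSXG = re.compile(r"G.S.G")

candidates = {  # hypothetical sequence fragments, for illustration only
    "seq1": "MKTLLVAGHSQGGALAAL",   # contains G-H-S-Q-G
    "seq2": "MKKLAVALTTAAALAGCSS",   # no motif
}

for name, seq in candidates.items():
    hit = GXSXG.search(seq)
    if hit:
        print(f"{name}: motif {hit.group()} at position {hit.start() + 1}")
    else:
        print(f"{name}: no G-X-S-X-G motif found")
```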
Stereo Correspondence Using Moment Invariants
NASA Astrophysics Data System (ADS)
Premaratne, Prashan; Safaei, Farzad
Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAV) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and land-based small vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information from pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution that uses moment invariants as a metric to find corresponding regions in image pairs, reducing the computational complexity and improving the accuracy of the disparity measures, which is significant for use in UAVs and in small robotic vehicles.
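A minimal sketch of the matching idea (not the authors' implementation) is shown below: assuming OpenCV and rectified image pairs, Hu moment invariants of a patch around a left-image pixel are compared against candidate patches along the same row of the right image, and the best match gives the disparity:

```python
# Feature matching along epipolar lines using Hu moment invariants.
import cv2
import numpy as np

def hu_vector(patch):
    """Log-scaled Hu moment invariants of a grayscale patch."""
    hu = cv2.HuMoments(cv2.moments(patch.astype(np.float32))).flatten()
    return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def match_along_row(left, right, x, y, half=8, max_disp=64):
    """Return a disparity estimate for pixel (x, y) of the left image."""
    ref = hu_vector(left[y - half:y + half, x - half:x + half])
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        xr = x - d
        if xr - half < 0:
            break
        cand = hu_vector(right[y - half:y + half, xr - half:xr + half])
        cost = np.linalg.norm(ref - cand)     # distance between invariant vectors
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```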
Towards implementation of cellular automata in Microbial Fuel Cells.
Tsompanas, Michail-Antisthenis I; Adamatzky, Andrew; Sirakoulis, Georgios Ch; Greenman, John; Ieropoulos, Ioannis
2017-01-01
The Microbial Fuel Cell (MFC) is a bio-electrochemical transducer converting waste products into electricity using microbial communities. A Cellular Automaton (CA) is a uniform array of finite-state machines that update their states in discrete time depending on the states of their closest neighbors by the same rule. Arrays of MFCs could, in principle, act as massively parallel computing devices with local connectivity between elementary processors. We provide a theoretical design of such a parallel processor by implementing CA in MFCs. We have chosen Conway's Game of Life as the 'benchmark' CA because it is the most popular CA and also exhibits an enormously rich spectrum of patterns. Each cell of the Game of Life CA is realized using two MFCs. The MFCs are linked electrically and hydraulically. The model is verified via simulation of an electrical circuit demonstrating equivalent behaviours. The design is a first step towards future implementations of fully autonomous biological computing devices with massive parallelism. The energy independence of such devices counteracts their somewhat slow transitions, compared to silicon circuitry, between the different states during computation.
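For reference, the behaviour each two-MFC cell is designed to reproduce is the standard synchronous Game of Life update rule; a minimal software sketch (on a wrap-around grid, purely illustrative) is:

```python
# Conway's Game of Life on a toroidal grid using numpy.
import numpy as np

def life_step(grid):
    """One synchronous update: a cell is alive next step if it has exactly
    three live neighbors, or is alive now and has exactly two."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Example: evolve a glider on a 10x10 grid.
g = np.zeros((10, 10), dtype=int)
g[1, 2] = g[2, 3] = g[3, 1] = g[3, 2] = g[3, 3] = 1
for _ in range(4):
    g = life_step(g)
```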
Towards implementation of cellular automata in Microbial Fuel Cells
Adamatzky, Andrew; Sirakoulis, Georgios Ch.; Greenman, John; Ieropoulos, Ioannis
2017-01-01
The Microbial Fuel Cell (MFC) is a bio-electrochemical transducer converting waste products into electricity using microbial communities. A Cellular Automaton (CA) is a uniform array of finite-state machines that update their states in discrete time depending on the states of their closest neighbors by the same rule. Arrays of MFCs could, in principle, act as massively parallel computing devices with local connectivity between elementary processors. We provide a theoretical design of such a parallel processor by implementing CA in MFCs. We have chosen Conway’s Game of Life as the ‘benchmark’ CA because it is the most popular CA and also exhibits an enormously rich spectrum of patterns. Each cell of the Game of Life CA is realized using two MFCs. The MFCs are linked electrically and hydraulically. The model is verified via simulation of an electrical circuit demonstrating equivalent behaviours. The design is a first step towards future implementations of fully autonomous biological computing devices with massive parallelism. The energy independence of such devices counteracts their somewhat slow transitions—compared to silicon circuitry—between the different states during computation. PMID:28498871
The Risky Shift Toward Online Activism: Do Hacktivists Pose an Increased Threat to the Homeland?
2014-09-01
Cow and was intended to refer to the use of technology to foster human rights and the open exchange of information. The term has since evolved to...Orbit Ion Cannon (LOIC) has become a favorite of the hacktivist group Anonymous. The tool, originally created to perform witting stress tests of...that were generating the enormous heat, the enormous pressure, the enormous growth, and really shaping the political." This failed approach to align
GPS synchronized power system phase angle measurements
NASA Astrophysics Data System (ADS)
Wilson, Robert E.; Sterlina, Patrick S.
1994-09-01
This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work will compare computer simulations of the same event with the field measurements.
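For illustration, a phasor (magnitude and phase angle) can be estimated from one cycle of sampled waveform with a single-bin DFT, and the angles at two buses can then be differenced; the textbook sketch below (made-up signals, not the PMU vendor's algorithm) shows the idea:

```python
# Single-bin DFT phasor estimate over exactly one fundamental cycle.
import numpy as np

def phasor(samples, fs, f0=60.0):
    """Return (rms_magnitude, angle_rad) of the f0 component of one window."""
    n = np.arange(len(samples))
    ref = np.exp(-1j * 2 * np.pi * f0 * n / fs)
    X = 2.0 * np.mean(samples * ref)          # complex peak amplitude
    return abs(X) / np.sqrt(2), np.angle(X)

fs = 3840.0                                    # Hz, 64 samples per 60 Hz cycle
t = np.arange(64) / fs
bus_a = 1.00 * np.cos(2 * np.pi * 60 * t + 0.10)   # synthetic bus voltages
bus_b = 0.98 * np.cos(2 * np.pi * 60 * t - 0.15)

_, ang_a = phasor(bus_a, fs)
_, ang_b = phasor(bus_b, fs)
print(f"phase angle difference ~ {np.degrees(ang_a - ang_b):.1f} degrees")
```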
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
NASA Astrophysics Data System (ADS)
Zack, J. W.
2015-12-01
Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors including the limited space-time resolution of the NWP models and shortcomings in the model's representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques often referred to as "machine learning methods", a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day ahead forecasts of the hourly wind-based generation. The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method, (2) artificial neural networks, (3) a decision-tree approach called random forests, (4) gradient boosted regression based upon a decision-tree algorithm, (5) support vector regression and (6) analog ensemble, which is a case-matching scheme. The presentation will provide (1) an overview of each method and the experimental design, (2) performance comparisons based on standard metrics such as bias, MAE and RMSE, (3) a summary of the performance characteristics of each approach and (4) a preview of further experiments to be conducted.
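As an illustration of how one of the compared MOS approaches can be set up, the sketch below trains a random forest to map raw NWP forecast features to observed hourly generation and then adjusts a new forecast; the feature names and data are hypothetical placeholders, not the study's actual inputs:

```python
# Random-forest MOS sketch: learn a correction from NWP features to observed power.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy training set of NWP-forecast predictors (hypothetical features).
X_train = np.column_stack([
    rng.uniform(0, 20, 2000),        # forecast hub-height wind speed (m/s)
    rng.uniform(0, 360, 2000),       # forecast wind direction (deg)
    rng.integers(0, 24, 2000),       # forecast valid hour
])
# Synthetic "observed" generation with a speed-dependent bias to be learned.
y_train = np.clip((X_train[:, 0] - 3) ** 2, 0, 150) + rng.normal(0, 5, 2000)

mos = RandomForestRegressor(n_estimators=200, random_state=0)
mos.fit(X_train, y_train)

X_new = np.array([[12.0, 250.0, 15]])           # a new raw NWP forecast
print("MOS-adjusted generation forecast:", mos.predict(X_new)[0])
```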
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Computational Power of Symmetry-Protected Topological Phases.
Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert
2017-07-07
We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.
Computational Power of Symmetry-Protected Topological Phases
NASA Astrophysics Data System (ADS)
Stephen, David T.; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert
2017-07-01
We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.
Emulating a million machines to investigate botnets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudish, Donald W.
2010-06-01
Researchers at Sandia National Laboratories in Livermore, California are creating what is in effect a vast digital petri dish able to hold one million operating systems at once in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands or in some cases even millions of computers, making them among the world's most powerful computers for some applications.
Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing
NASA Astrophysics Data System (ADS)
Kim, Mooseop; Ryou, Jaecheol
The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it provides key primitives supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architectures and design methods for low-power SHA-1 circuits are required. In this paper, we present a novel and efficient hardware architecture of a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can process a 512-bit data block using fewer than 7,000 gates and draws about 1.1 mA on a 0.25 μm CMOS process.
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This will require a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real time because of the need to control the power elements.
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
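A common realization of the low/high-precision idea for the linear-solver kernel is mixed-precision iterative refinement: do the expensive solve in single precision, then refine with double-precision residuals. The sketch below is a textbook version under that assumption, not the paper's implementation (which would reuse a low-precision factorization rather than re-solving):

```python
# Mixed-precision iterative refinement for Ax = b.
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)      # cheap low-precision solve
    for _ in range(iters):
        r = b - A @ x                                      # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32))    # correction in float32
        x = x + dx.astype(np.float64)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)
x = mixed_precision_solve(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```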
Solar Power Satellites: Reconsideration as Renewable Energy Source Based on Novel Approaches
NASA Astrophysics Data System (ADS)
Ellery, Alex
2017-04-01
Solar power satellites (SPS) are a solar energy generation mechanism that captures solar energy in space and converts this energy into microwaves for transmission to Earth-based rectenna arrays. They offer a constant, high integrated energy density of 200 W/m², compared to <10 W/m² for other renewable energy sources. Despite this promise as a clean energy source, SPS have been relegated out of consideration due to their enormous cost and technological challenge. It has been suggested that for solar power satellites to become economically feasible, launch costs must decrease from their current $20,000/kg to <$200/kg. Even with the advent of single-stage-to-orbit launchers, which propose launch costs dropping to $2,000/kg, this will not be realized. Yet the advantages of solar power satellites are many, including the provision of stable baseload power. Here, I present a novel approach to reduce the specific cost of solar power satellites to $1/kg by leveraging two enabling technologies - in-situ resource utilization of lunar material and 3D printing of this material. Specifically, we demonstrate that electric motors may be constructed from lunar material through 3D printing, representing a major step towards the development of self-replicating machines. Such machines have the capacity to build solar power satellites on the Moon, thereby bypassing the launch cost problem. The productive capacity of self-replicating machines favours the adoption of large constellations of small solar power satellites. This opens up additional clean energy options for combating climate change by meeting future global energy demands.
NASA Technical Reports Server (NTRS)
Pieters, Carle M.
1992-01-01
Science and technology applications for the Moon have not fully kept pace with technical advancements in sensor development and analytical information extraction capabilities. Appropriate unanswered questions for the Moon abound, but until recently there has been little motivation to link sophisticated technical capabilities with specific measurement and analysis projects. Over the last decade enormous technical progress has been made in the development of (1) CCD photometric array detectors; (2) visible to near-infrared imaging spectrometers; (3) infrared spectroscopy; (4) high-resolution dual-polarization radar imaging at 3.5, 12, and 70 cm; and, equally important, (5) data analysis and information extraction techniques using compact powerful computers. Parts of each of these have been tested separately, but there has been no programmatic effort to develop and optimize instruments to meet lunar science and resource assessment needs (e.g., specific wavelength range, resolution, etc.), nor to coordinate activities so that the symbiotic relation between different kinds of data can be fully realized. No single type of remotely acquired data completely characterizes the lunar environment, but there has been little opportunity for integration of diverse advanced sensor data for the Moon. Two examples of technology concepts for lunar measurements are given. Using VIS/near-IR spectroscopy, the mineral composition of surface material can be derived from visible and near-infrared radiation reflected from the surface. The surface and subsurface scattering properties of the Moon can be analyzed using radar backscattering imaging.
Patient data management systems in intensive care--the situation in Europe.
Metnitz, P G; Lenz, K
1995-09-01
Computerized Patient Data Management Systems (PDMS) have been developed for handling the enormous increase in data collection in ICUs. This study tries to evaluate the functionality of such systems installed in Europe. Criteria reflecting usefulness and practicality formed the basis of a questionnaire to be answered accurately by the vendors. We then examined the functions provided and their implementation in European ICUs. Next, an "Information Delivery Test" evaluated variations in performance, taking questions arising from daily routine work and measuring the time of information delivery. The ICUs studied were located in Vienna (Austria), Antwerp (Belgium), Dortmund (Germany), and Kuopio (Finland). Five PDMS were selected on the basis of our inclusion criteria: commercial availability with at least one installation in Europe, bedside-based design, realization of international standards and a prescribed minimum of functionality. The "Table of Functions" shows an overview of functions and their implementation. "System analyses" indicates predominant differences in properties and functions found between the systems. Results of the "Information Delivery Tests" are shown in the graphic charts. Systems with graphical data presentation have advantages over systems presenting data mainly in numerical format. The time has come to form a medical establishment powerful enough to set standards and thus communicate with industrial partners as well as with hospital management responsible for planning, purchasing and implementing PDMS. Overall, communication between clinicians, nurses, computer scientists and PDMS vendors must be enhanced to achieve the common goal: useful and practical data management systems in ICUs.
Teach Graphic Design Basics with PowerPoint
ERIC Educational Resources Information Center
Lazaros, Edward J.; Spotts, Thomas H.
2007-01-01
While PowerPoint is generally regarded as simply software for creating slide presentations, it includes often overlooked--but powerful--drawing tools. Because it is part of the Microsoft Office package, PowerPoint comes preloaded on many computers and thus is already available in many classrooms. Since most computers are not preloaded with good…
NASA Technical Reports Server (NTRS)
Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.
1974-01-01
The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.
Emissions and temperature benefits: The role of wind power in China.
Duan, Hongbo
2017-01-01
As a non-fossil technology, wind power has an enormous advantage over coal because of its role in climate change mitigation. Therefore, it is important to investigate how substituting wind power for coal-fired electricity will affect emission reductions, changes in radiative forcing and rising temperatures, particularly in the context of emission limits. We developed an integrated methodology that includes two parts: an energy-economy-environmental (3E) integrated model and an emission-temperature response model. The former is used to simulate the dynamic relationships between economic output, wind energy and greenhouse gas (GHG) emissions; the latter is used to evaluate changes in radiative forcing and warming. Under the present development projection, wind energy cannot serve as a major force in curbing emissions, even under the strictest space-restraining scenario. China's temperature contribution to global warming will be up to 21.76% if warming is limited to 2 degrees. With the wind-for-coal power substitution, the corresponding contribution to global radiative forcing increase and temperature rise will decrease by up to 10% and 6.57%, respectively. Substituting wind power for coal-fired electricity has positive effects on emission reductions and warming control. However, wind energy alone is insufficient for climate change mitigation. It forms an important component of the renewable energy portfolio used to combat global warming. Copyright © 2016 Elsevier Inc. All rights reserved.
Environmental Pollution and Health
Enormous progress has been made in identifying chemicals in the environment that adversely affect human health. The environment is cleaner, and, partly as a result, people are living longer and healthier lives. Major uncertainties remain, however, regarding the enormous number o...
On heat loading, novel divertors, and fusion reactors
NASA Astrophysics Data System (ADS)
Kotschenreuther, M.; Valanju, P. M.; Mahajan, S. M.; Wiley, J. C.
2007-07-01
The limited thermal power handling capacity of the standard divertors (used in current as well as projected tokamaks) is likely to force extremely high (˜90%) radiation fractions (f_rad) in tokamak fusion reactors that have heating powers considerably larger than ITER [D. J. Campbell, Phys. Plasmas 8, 2041 (2001)]. Such enormous values of the necessary f_rad could have serious and debilitating consequences for the core confinement, stability, and dependability of a fusion power reactor, especially in reactors with Internal Transport Barriers. A new class of divertors, called X-divertors (XD), which considerably enhance the divertor thermal capacity through a flaring of the field lines only near the divertor plates, may be necessary and sufficient to overcome these problems and lead to a dependable fusion power reactor with acceptable economics. X-divertors will lower the bar on the necessary confinement, bringing it into the range of present experimental results. Its ability to reduce the radiative burden gives the X-divertor a key advantage. Lower radiation demands allow sharply peaked density profiles that enhance the bootstrap fraction, creating the possibility of a highly increased beta for discharges with the same normalized beta. The X-divertor emerges as a beta-enhancer capable of raising it by up to roughly a factor of 2.
Wang, Sen; Wu, Zhong-Shuai; Zheng, Shuanghao; Zhou, Feng; Sun, Chenglin; Cheng, Hui-Ming; Bao, Xinhe
2017-04-25
Micro-supercapacitors (MSCs) hold great promise as highly competitive miniaturized power sources satisfying the increased demand of smart integrated electronics. However, single-step scalable fabrication of MSCs with both high energy and power densities is still challenging. Here we demonstrate the scalable fabrication of graphene-based monolithic MSCs with diverse planar geometries and capable of superior integration by photochemical reduction of graphene oxide/TiO2 nanoparticle hybrid films. The resulting MSCs exhibit high volumetric capacitance of 233.0 F cm^-3, exceptional flexibility, and remarkable capacity of modular serial and parallel integration in aqueous gel electrolyte. Furthermore, by precisely engineering the interface of electrode with electrolyte, these monolithic MSCs can operate well in a hydrophobic electrolyte of ionic liquid (3.0 V) at a high scan rate of 200 V s^-1, two orders of magnitude higher than those of conventional supercapacitors. More notably, the MSCs show landmark volumetric power density of 312 W cm^-3 and energy density of 7.7 mWh cm^-3, both of which are among the highest values attained for carbon-based MSCs. Therefore, such monolithic MSC devices based on photochemically reduced, compact graphene films possess enormous potential for numerous miniaturized, flexible electronic applications.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couture, Aaron Joseph; Casten, Richard F.; Cakirli, R. B.
Here, neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
Couture, Aaron Joseph; Casten, Richard F.; Cakirli, R. B.
2017-12-20
Here, neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement.
Flexible solid-state supercapacitors based on carbon nanoparticles/MnO2 nanorods hybrid structure.
Yuan, Longyan; Lu, Xi-Hong; Xiao, Xu; Zhai, Teng; Dai, Junjie; Zhang, Fengchao; Hu, Bin; Wang, Xue; Gong, Li; Chen, Jian; Hu, Chenguo; Tong, Yexiang; Zhou, Jun; Wang, Zhong Lin
2012-01-24
A highly flexible solid-state supercapacitor was fabricated through a simple flame synthesis method and electrochemical deposition process based on a carbon nanoparticles/MnO2 nanorods hybrid structure using polyvinyl alcohol/H3PO4 electrolyte. Carbon fabric is used as a current collector and electrode (mechanical support), leading to a simplified, highly flexible, and lightweight architecture. The device exhibited good electrochemical performance with an energy density of 4.8 Wh/kg at a power density of 14 kW/kg, and a demonstration of a practical device is also presented, highlighting the path for its enormous potential in energy management. © 2011 American Chemical Society
Development of an Updated Societal-Risk Goal for Nuclear Power Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vicki Bier; Michael Corradini; Robert Youngblood
2014-07-01
This report briefly summarizes work done in FY 2013 on the subject LDRD. The working hypothesis is that societal disruption should be addressed in a safety goal. This is motivated by the point that the Fukushima disaster resulted in very little public dose, but enormous societal disruption; a goal that addressed societal disruption would fill a perceived gap in the US NRC safety goal structure. This year's work entailed analyzing the consequences of postulated accidents at various reactor sites in the US, specifically with a view to quantifying the number of people relocated and the duration of their relocation, to see whether this makes sense as a measure of societal disruption.
Microwave-optical two-photon excitation of Rydberg states
NASA Astrophysics Data System (ADS)
Tate, D. A.; Gallagher, T. F.
2018-03-01
We report efficient microwave-optical two-photon excitation of Rb Rydberg atoms in a magneto-optical trap. This approach allows the excitation of normally inaccessible states and provides a path toward excitation of high-angular-momentum states. The efficiency stems from the elimination of the Doppler width, the use of a narrow-band pulsed laser, and the enormous electric-dipole matrix element connecting the intermediate and final states of the transition. The excitation is efficient in spite of the low optical and microwave powers, of order 1 kW and 1 mW, respectively. This is an application of the large dipole coupling strengths between Rydberg states to achieve two-photon excitation of Rydberg atoms.
The SMAT fiber laser for industrial applications
NASA Astrophysics Data System (ADS)
Ding, Jianwu; Liu, Jinghui; Wei, Xi; Xu, Jun
2017-02-01
With the increased adoption of high-power fiber lasers for various industrial applications, the downtime and the reliability of fiber lasers become more and more important. Here we present our approach toward a more reliable and more intelligent laser source for industrial applications: the SMAT fiber laser, with its extensive sensor network and multi-level protection mechanism, its mobile connection and mobile app, and its Smart Cloud. The proposed framework is the first IoT (Internet of Things) approach integrated into an industrial laser; it not only improves the reliability of the laser but also opens up enormous potential for value-adding services by gathering and analyzing the big data from the connected SMAT lasers.
Sengupta, Mitu
2010-01-01
This article contests the characterisation of the popular and acclaimed film, Slumdog Millionaire, as a realistic portrayal of India's urban poverty that will ultimately serve as a tool of advocacy for India's urban poor. It argues that the film's reductive view of slum-spaces will more probably reinforce negative attitudes towards slum-dwellers, lending credibility to the sorts of policies that have historically dispossessed them of power and dignity. By drawing attention to the film's celebration of characters and spaces that symbolise Western culture and Northern trajectories of 'development', the article also critically engages with some of the issues raised by the film's enormous success.
Influence of preheating on grindability of coal
Lytle, J.; Choi, N.; Prisbrey, K.
1992-01-01
Enormous quantities of coal must be ground as feed to power generation facilities. The energy cost of grinding is significant at 5 to 15 kWh/ton. If grindability could be increased by preheating the coal with waste heat, energy costs could be reduced. The objective of this work was to determine how grindability was affected by preheating. The method was to use population balance grinding models to interpret results of grinding coal before and after a heat treatment. Simulation of locked cycle tests gave a 40% increase in grindability. Approximately 40% grinding energy saving can be expected. By using waste heat for coal treatment, the targeted energy savings would be maintained. © 1992.
Annual Rainfall Forecasting by Using Mamdani Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Fallah-Ghalhary, G.-A.; Habibi Nokhandan, M.; Mousavi Baygi, M.
2009-04-01
Long-term rainfall prediction is very important to countries thriving on agro-based economies. In general, climate and rainfall are highly non-linear natural phenomena, giving rise to what is known as the "butterfly effect". The parameters required to predict rainfall are enormous in number, even for a short period. Soft computing is an innovative approach to constructing computationally intelligent systems that are supposed to possess human-like expertise within a specific domain, adapt themselves and learn to do better in changing environments, and explain how they make decisions. Unlike conventional artificial intelligence techniques, the guiding principle of soft computing is to exploit tolerance for imprecision, uncertainty, robustness, and partial truth to achieve tractability and better rapport with reality. In this paper, 33 years of rainfall data from Khorasan state, the northeastern part of Iran situated within the latitude-longitude range (31°-38°N, 74°-80°E), are analyzed. This research attempted to train Fuzzy Inference System (FIS)-based prediction models with the 33 years of rainfall data. For performance evaluation, the model-predicted outputs were compared with the actual rainfall data. Simulation results reveal that soft computing techniques are promising and efficient. The test results of the FIS model showed an RMSE of 52 millimeters.
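The following is a minimal, self-contained sketch of Mamdani-style fuzzy inference of the kind described above (triangular membership functions, min implication, max aggregation, centroid defuzzification). The input variable, membership breakpoints, and rule base are illustrative assumptions, not the parameters fitted in the study.

# Minimal Mamdani fuzzy inference sketch (illustrative parameters only).
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_rainfall(humidity):
    """Map a single assumed climate input (relative humidity, %) to annual rainfall (mm)."""
    # Input fuzzy sets (assumed breakpoints).
    low_h = tri(humidity, 0, 25, 50)
    high_h = tri(humidity, 40, 75, 100)
    # Output fuzzy sets over a rainfall universe of 0-400 mm.
    universe = [x * 4.0 for x in range(101)]
    dry = [tri(r, 0, 100, 200) for r in universe]
    wet = [tri(r, 150, 300, 400) for r in universe]
    # Rules: IF humidity is low THEN rainfall is dry; IF humidity is high THEN rainfall is wet.
    # Min implication per rule, max aggregation across rules.
    aggregated = [max(min(low_h, d), min(high_h, w)) for d, w in zip(dry, wet)]
    # Centroid defuzzification.
    num = sum(mu * r for mu, r in zip(aggregated, universe))
    den = sum(aggregated)
    return num / den if den > 0 else 0.0

print(round(mamdani_rainfall(65.0), 1))  # crisp rainfall estimate in mm for a fairly humid year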
The EPA Comptox Chemistry Dashboard: A Web-Based Data ...
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data but recent developments have focused on the development of a new software architecture that assembles the resources into a single platform. A new web application, the CompTox Chemistry Dashboard provides access to data associated with ~720,000 chemical substances. These data include experimental and predicted physicochemical property data, bioassay screening data associated with the ToxCast program, product and functional use information and a myriad of related data of value to environmental scientists. The dashboard provides chemical-based searching based on chemical names, synonyms and CAS Registry Numbers. Flexible search capabilities allow for chemical identificati
Kisała, Joanna; Heclik, Kinga I; Pogocki, Krzysztof; Pogocki, Dariusz
2018-05-16
The blood-brain barrier (BBB) is a complex system controlling two-way traffic of substances between the circulatory (cardiovascular) system and the central nervous system (CNS). It is almost perfectly crafted to regulate brain homeostasis and to permit selective transport of molecules that are essential for brain function. For potential drug candidates, the CNS-oriented neuropharmaceuticals as well as those with primary targets in the periphery, the extent to which a substance in the circulation gains access to the CNS seems crucial. With the advent of nanopharmacology, the problem of BBB permeability for drug nano-carriers gains new significance. Compared to some other fields of medicinal chemistry, the computational science of nanodelivery is still too premature to offer black-box-type solutions, especially for the BBB case. However, even its enormous complexity can be spelled out in terms of physical principles and, as such, subjected to computation. A basic understanding of the various physico-chemical parameters describing brain uptake is required to take advantage of them for BBB nanodelivery. This mini-review provides a sketchy introduction to the essential concepts allowing application of computational simulation to BBB nanodelivery design. Copyright © Bentham Science Publishers.
The EPA CompTox Chemistry Dashboard - an online resource ...
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data. Recent work has focused on the development of a new architecture that assembles the resources into a single platform. With a focus on delivering access to Open Data streams, web service integration accessibility and a user-friendly web application the CompTox Dashboard provides access to data associated with ~720,000 chemical substances. These data include research data in the form of bioassay screening data associated with the ToxCast program, experimental and predicted physicochemical properties, product and functional use information and related data of value to environmental scientists. This presentation will provide an overview of the CompTox Dashboard and its va
Prototyping Instruments for Chemical Laboratory Using Inexpensive Electronic Modules.
Urban, Pawel L
2018-05-15
Open-source electronics and programming can augment chemical and biomedical research. Currently, chemists can choose from a broad range of low-cost universal electronic modules (microcontroller boards and single-board computers) and use them to assemble working prototypes of scientific tools to address specific experimental problems and to support daily research work. The learning time can be as short as a few hours, and the required budget is often as low as 50 USD. Prototyping instruments using low-cost electronic modules gives chemists enormous flexibility to design and construct customized instrumentation, which can reduce the delays caused by limited access to high-end commercial platforms. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
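As a concrete illustration of the kind of prototype meant here, the sketch below reads an analog chemical sensor once per second on a microcontroller running MicroPython. It is only a sketch under stated assumptions: the ADC pin, the 3.3 V reference, and the sampling scheme depend on the particular board and sensor, and the code runs on the board itself rather than on a desktop interpreter.

# MicroPython sketch for a generic microcontroller board with an ADC on pin 26
# (pin number and voltage scaling are board- and sensor-specific assumptions).
import time
from machine import ADC, Pin

adc = ADC(Pin(26))  # analog input connected to the sensor output

def read_volts(samples=16):
    """Average several raw ADC readings and convert to volts (3.3 V reference assumed)."""
    raw = sum(adc.read_u16() for _ in range(samples)) / samples
    return 3.3 * raw / 65535

while True:
    print("sensor voltage:", round(read_volts(), 3))
    time.sleep(1)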
Heterotic computing: exploiting hybrid computational devices.
Kendon, Viv; Sebald, Angelika; Stepney, Susan
2015-07-28
Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Expression Templates for Truncated Power Series
NASA Astrophysics Data System (ADS)
Cary, John R.; Shasharina, Svetlana G.
1997-05-01
Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed expression templates for a subset of truncated power series operations (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
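For readers unfamiliar with the object, a minimal Python sketch of the arithmetic being optimized is given below: coefficients up to a fixed order, with products truncated at that order. The paper's actual gains come from C++ expression templates that fuse such operations without temporaries, which this plain sketch does not attempt to reproduce.

# Truncated power series arithmetic: coefficients c[0..ORDER-1] represent
# c[0] + c[1]*x + ... + c[ORDER-1]*x**(ORDER-1); higher-order terms are discarded.
ORDER = 6

def tps_add(a, b):
    return [x + y for x, y in zip(a, b)]

def tps_mul(a, b):
    c = [0.0] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER:  # truncation: drop terms of order >= ORDER
                c[i + j] += ai * bj
    return c

p = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0]   # 1 + x
q = [1.0, -1.0, 1.0, 0.0, 0.0, 0.0]  # 1 - x + x^2
print(tps_mul(p, q))                 # [1.0, 0.0, 0.0, 1.0, 0.0, 0.0], i.e. 1 + x^3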
Systems and methods for rapid processing and storage of data
Stalzer, Mark A.
2017-01-24
Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
NASA Technical Reports Server (NTRS)
Szabo, James J.
2015-01-01
This Phase II project is developing a magnesium (Mg) Hall effect thruster system that would open the door for in situ resource utilization (ISRU)-based solar system exploration. Magnesium is light and easy to ionize. For a Mars- Earth transfer, the propellant mass savings with respect to a xenon Hall effect thruster (HET) system are enormous. Magnesium also can be combusted in a rocket with carbon dioxide (CO2) or water (H2O), enabling a multimode propulsion system with propellant sharing and ISRU. In the near term, CO2 and H2O would be collected in situ on Mars or the moon. In the far term, Mg itself would be collected from Martian and lunar regolith. In Phase I, an integrated, medium-power (1- to 3-kW) Mg HET system was developed and tested. Controlled, steady operation at constant voltage and power was demonstrated. Preliminary measurements indicate a specific impulse (Isp) greater than 4,000 s was achieved at a discharge potential of 400 V. The feasibility of delivering fluidized Mg powder to a medium- or high-power thruster also was demonstrated. Phase II of the project evaluated the performance of an integrated, highpower Mg Hall thruster system in a relevant space environment. Researchers improved the medium power thruster system and characterized it in detail. Researchers also designed and built a high-power (8- to 20-kW) Mg HET. A fluidized powder feed system supporting the high-power thruster was built and delivered to Busek Company, Inc.
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.
1979-01-01
The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
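A minimal NumPy sketch of the mixed-precision idea for the linear-systems kernel mentioned above is given below: factor and solve cheaply in single precision, then refine the residual in double precision. The tolerances and the test matrix are illustrative assumptions, and the paper's actual kernels and fine-grain power-profiling tools are not reproduced here.

import numpy as np

def mixed_precision_solve(A, b, max_iters=10, tol=1e-12):
    """Solve A x = b using single-precision inner solves refined in double precision.
    (A real implementation would reuse a single float32 LU factorization.)"""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iters):
        r = b - A @ x                       # residual computed in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx                             # correction computed cheaply in float32
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well-conditioned test matrix
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm near double-precision accuracy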
2014-01-01
Background Integrating and analyzing heterogeneous genome-scale data is a huge algorithmic challenge for modern systems biology. Bipartite graphs can be useful for representing relationships across pairs of disparate data types, with the interpretation of these relationships accomplished through an enumeration of maximal bicliques. Most previously-known techniques are generally ill-suited to this foundational task, because they are relatively inefficient and without effective scaling. In this paper, a powerful new algorithm is described that produces all maximal bicliques in a bipartite graph. Unlike most previous approaches, the new method neither places undue restrictions on its input nor inflates the problem size. Efficiency is achieved through an innovative exploitation of bipartite graph structure, and through computational reductions that rapidly eliminate non-maximal candidates from the search space. An iterative selection of vertices for consideration based on non-decreasing common neighborhood sizes boosts efficiency and leads to more balanced recursion trees. Results The new technique is implemented and compared to previously published approaches from graph theory and data mining. Formal time and space bounds are derived. Experiments are performed on both random graphs and graphs constructed from functional genomics data. It is shown that the new method substantially outperforms the best previous alternatives. Conclusions The new method is streamlined, efficient, and particularly well-suited to the study of huge and diverse biological data. A robust implementation has been incorporated into GeneWeaver, an online tool for integrating and analyzing functional genomics experiments, available at http://geneweaver.org. The enormous increase in scalability it provides empowers users to study complex and previously unassailable gene-set associations between genes and their biological functions in a hierarchical fashion and on a genome-wide scale. This practical computational resource is adaptable to almost any applications environment in which bipartite graphs can be used to model relationships between pairs of heterogeneous entities. PMID:24731198
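To pin down the object being enumerated, the sketch below lists all maximal bicliques of a tiny bipartite graph by brute-force closure. It is only an illustration of the definition; it does not reflect the efficiency techniques (candidate pruning, neighborhood-ordered recursion) described in the paper, and the example graph is assumed.

from itertools import combinations

def maximal_bicliques(adj):
    """adj maps each left vertex to a set of right neighbors.
    Returns all maximal bicliques (L, R) with both sides non-empty."""
    left = list(adj)
    found = set()
    for k in range(1, len(left) + 1):
        for subset in combinations(left, k):
            # Right side: common neighbors of the chosen left vertices.
            right = set.intersection(*(adj[u] for u in subset))
            if not right:
                continue
            # Close the left side: every left vertex adjacent to all of 'right'.
            closed_left = frozenset(u for u in left if right <= adj[u])
            found.add((closed_left, frozenset(right)))
    return found

adj = {"a": {1, 2}, "b": {1, 2, 3}, "c": {2, 3}}
for L, R in sorted(maximal_bicliques(adj), key=lambda lr: sorted(lr[0])):
    print(sorted(L), sorted(R))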
An Interactive Computer Tool for Teaching About Desalination and Managing Water Demand in the US
NASA Astrophysics Data System (ADS)
Ziolkowska, J. R.; Reyes, R.
2016-12-01
This paper presents an interactive tool to geospatially and temporally analyze desalination developments and trends in the US over the time span 1950-2013, their current contribution to satisfying water demands, and their future potential. The computer tool is open access and can be used by any user with an Internet connection, thus facilitating interactive learning about water resources. The tool can also be used by stakeholders and policy makers for decision-making support and for designing sustainable water management strategies. Desalination technology has been acknowledged as a solution for sustainable management of water demand stemming from many sectors, including municipalities, industry, agriculture, power generation, and other users. Desalination has been applied successfully in the US and many countries around the world since the 1950s. As of 2013, around 1,336 desalination plants were operating in the US alone, with a daily production capacity of 2 BGD (billion gallons per day) (GWI, 2013). Despite a steady increase in the number of new desalination plants and growing production capacity, in many regions the costs of desalination are still prohibitive. At the same time, the technology offers a tremendous potential for `enormous supply expansion that exceeds all likely demands' (Chowdhury et al., 2013). The model and tool are based on data from Global Water Intelligence (GWI, 2013). The analysis shows that more than 90% of all the plants in the US are small-scale plants with a capacity below 4.31 MGD. Most of the plants (and especially the larger plants) are located on the US East Coast, as well as in California, Texas, Oklahoma, and Florida. The models and the tool provide information about the economic feasibility of potential new desalination plants based on the access to feed water, energy sources, water demand, and the experiences of other plants in that region.
Raith, Stefan; Vogel, Eric Per; Anees, Naeema; Keul, Christine; Güth, Jan-Frederik; Edelhoff, Daniel; Fischer, Horst
2017-01-01
Chairside manufacturing based on digital image acquisition is gaining increasing importance in dentistry. For the standardized application of these methods, it is paramount to have highly automated digital workflows that can process acquired 3D image data of dental surfaces. Artificial Neural Networks (ANNs) are numerical methods primarily used to mimic the complex networks of neural connections in the natural brain. Our hypothesis is that an ANN can be developed that is capable of classifying dental cusps with sufficient accuracy. This bears enormous potential for an application in chairside manufacturing workflows in the dental field, as it closes the gap between digital acquisition of dental geometries and modern computer-aided manufacturing techniques. Three-dimensional surface scans of dental casts representing natural full dental arches were transformed to range image data. These data were processed using an automated algorithm to detect candidates for tooth cusps according to salient geometrical features. These candidates were classified following common dental terminology and used as training data for a tailored ANN. For the actual cusp feature description, two different approaches were developed and applied to the available data: the first uses the relative location of the detected cusps as input data, and the second method directly takes the image information given in the range images. In addition, a combination of both was implemented and investigated. Both approaches showed high performance with correct classifications of 93.3% and 93.5%, respectively, with improvements by the combination shown to be minor. This article presents for the first time a fully automated method for the classification of teeth that could be confirmed to work with sufficient precision to exhibit the potential for its use in clinical practice, which is a prerequisite for automated computer-aided planning of prosthetic treatments with subsequent automated chairside manufacturing. Copyright © 2016 Elsevier Ltd. All rights reserved.
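As a hedged illustration of the first (cusp-location) feature approach, the sketch below trains a small multilayer perceptron on synthetic placeholder features. The feature layout, class labels, and network size are assumptions for illustration and do not reproduce the study's tailored ANN or its data.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: each row holds the (x, y) offset of a detected cusp candidate
# relative to the centroid of all candidates on the same tooth; labels are cusp classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)  # four synthetic "cusp" classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))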
Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
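The CPU-side sketch below illustrates only the batch decomposition idea: propagated debris positions are screened in fixed-size batches against a spacecraft state, flagging approaches below a distance threshold. The arrays, threshold, and batch size are placeholders, and the CUDA stream overlap of data transfers and kernels described above is not reproduced here.

import numpy as np

def screen_batches(debris_pos, sc_pos, threshold_km=5.0, batch=256):
    """Return indices of debris whose distance to the spacecraft is below threshold.
    debris_pos: (N, 3) array of positions [km]; sc_pos: (3,) spacecraft position [km]."""
    close = []
    for start in range(0, len(debris_pos), batch):
        block = debris_pos[start:start + batch]     # one batch at a time,
        d = np.linalg.norm(block - sc_pos, axis=1)  # mirroring the block decomposition
        close.extend(start + i for i in np.flatnonzero(d < threshold_km))
    return close

rng = np.random.default_rng(1)
debris = rng.uniform(-7000, 7000, size=(1382, 3))   # 1382 debris objects, as in the example above
spacecraft = np.array([1000.0, 2000.0, 3000.0])
print(len(screen_batches(debris, spacecraft, threshold_km=500.0)))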
Crowdfunding for Personalized Medicine Research.
Fumagalli, Danielle C; Gouw, Arvin M
2015-12-01
Given the current funding situation of the National Institutes of Health, getting funding for rare disease research is extremely difficult. In light of the enormous potential for research in the rare diseases and the scarcity of research funding, we provide a case study of a novel successful crowdfunding approach at a non-profit organization called Rare Genomics Institute. We partner with biotechnology companies willing to donate their products, such as mouse models, gene editing software, and sequencing services, for which researchers can apply. First, we find that personal stories can be powerful tools to seek funding from sympathetic donors who do not have the same rational considerations of impact and profit. Second, for foundations facing funding restrictions, company donations can be a valuable tool in addition to crowdfunding. Third, rare disease research is particularly rewarding for scientists as they proceed to be pioneers in the field during their academic careers. Overall, by connecting donors, foundations, researchers, and patients, crowdfunding has become a powerful alternative funding mechanism for personalized medicine.
Crowdfunding for Personalized Medicine Research
Fumagalli, Danielle C.; Gouw, Arvin M.
2015-01-01
Given the current funding situation of the National Institutes of Health, getting funding for rare disease research is extremely difficult. In light of the enormous potential for research in the rare diseases and the scarcity of research funding, we provide a case study of a novel successful crowdfunding approach at a non-profit organization called Rare Genomics Institute. We partner with biotechnology companies willing to donate their products, such as mouse models, gene editing software, and sequencing services, for which researchers can apply. First, we find that personal stories can be powerful tools to seek funding from sympathetic donors who do not have the same rational considerations of impact and profit. Second, for foundations facing funding restrictions, company donations can be a valuable tool in addition to crowdfunding. Third, rare disease research is particularly rewarding for scientists as they proceed to be pioneers in the field during their academic careers. Overall, by connecting donors, foundations, researchers, and patients, crowdfunding has become a powerful alternative funding mechanism for personalized medicine. PMID:26604866
NASA Astrophysics Data System (ADS)
Gribkov, V. A.; Miklaszewski, R.; Paduch, M.; Zielinska, E.; Chernyshova, M.; Pisarczyk, T.; Pimenov, V. N.; Demina, E. V.; Niemela, J.; Crespo, M.-L.; Cicuttin, A.; Tomaszewski, K.; Sadowski, M. J.; Skladnik-Sadowska, E.; Pytel, K.; Zawadka, A.; Giannini, G.; Longo, F.; Talab, A.; Ul'yanenko, S. E.
2015-03-01
The paper presents some outcomes obtained during the year of 2013 of the activity in the frame of the International Atomic Energy Agency Co-ordinated research project "Investigations of Materials under High Repetition and Intense Fusion-Relevant Pulses". The main results are related to the effects created at the interaction of powerful pulses of different types of radiation (soft and hard X-rays, hot plasma and fast ion streams, neutrons, etc. generated in Dense Plasma Focus (DPF) facilities) with various materials including those that are counted as perspective ones for their use in future thermonuclear reactors. Besides we discuss phenomena observed at the irradiation of biological test objects. We examine possible applications of nanosecond powerful pulses of neutrons to the aims of nuclear medicine and for disclosure of hidden illegal objects. Special attention is devoted to discussions of a possibility to create extremely large and enormously diminutive DPF devices and probabilities of their use in energetics, medicine and modern electronics.
NASA Technical Reports Server (NTRS)
Noreen, Gary K.
1991-01-01
The RadioSat network under development by radio Satellite Corporation will use mobile satellite (MSAT) technology to provide diverse personal communications, broadcast, and navigation services. The network will support these services simultaneously for integrated mobile radios throughout Canada and the United States. The RadioSat network takes advantage of several technological breakthroughs, all coming to fruition by the time the first MSAT satellite is launched in 1994. The most important of these breakthroughs is the enormous radiated power of each MSAT spacecraft - orders of magnitude greater than the radiated power of previous L-band spacecraft. Another important breakthrough is the development of advanced digital audio compression algorithms, enabling the transmission of broadcast quality music at moderate data rates. Finally, continuing dramatic increases in VLSI capabilities permit the production of complex, multi-function mobile satellite radios in very large quantities at prices little more than those of conventional car radios. In addition to performance breakthroughs and their economic implications to RadioSat, the design of the RadioSat network is reviewed.
Integrating CO₂ storage with geothermal resources for dispatchable renewable electricity
Buscheck, Thomas A.; Bielicki, Jeffrey M.; Chen, Mingjie; ...
2014-12-31
We present an approach that uses the huge fluid and thermal storage capacity of the subsurface, together with geologic CO₂ storage, to harvest, store, and dispatch energy from subsurface (geothermal) and surface (solar, nuclear, fossil) thermal resources, as well as energy from electrical grids. Captured CO₂ is injected into saline aquifers to store pressure, generate artesian flow of brine, and provide an additional working fluid for efficient heat extraction and power conversion. Concentric rings of injection and production wells are used to create a hydraulic divide to store pressure, CO₂, and thermal energy. Such storage can take excess power from the grid and excess/waste thermal energy, and dispatch that energy when it is demanded, enabling increased penetration of variable renewables. Stored CO₂ functions as a cushion gas to provide enormous pressure-storage capacity and displaces large quantities of brine, which can be desalinated and/or treated for a variety of beneficial uses.
Characteristics of wood ash and influence on soil properties and nutrient uptake: an overview.
Demeyer, A; Voundi Nkana, J C; Verloo, M G
2001-05-01
Wood industries and power plants generate enormous quantities of wood ash. Disposal in landfills has long been a common method of removal. New regulations for conserving the environment have raised the costs of landfill disposal and added to the difficulties of acquiring new sites for disposal. Over the past few decades a number of studies have been carried out on the utilization of wood ashes in agriculture and forestry as an alternative method of disposal. Because of their properties and their influence on soil chemistry, the utilization of wood ashes is particularly suited to the fertility management of tropical acid soils and forest soils. This review principally focuses on ash from the wood industry and power plants and considers its physical, chemical and mineralogical characteristics, its effect on soil properties, on the availability of nutrient elements and on the growth and chemical composition of crops and trees, as well as its impact on the environment.
Application of Advanced Wide Area Early Warning Systems with Adaptive Protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumstein, Carl; Cibulka, Lloyd; Thorp, James
2014-09-30
Recent blackouts of power systems in North America and throughout the world have shown how critical a reliable power system is to modern societies, and the enormous economic and societal damage a blackout can cause. It has been noted that unanticipated operation of protection systems can contribute to cascading phenomena and, ultimately, blackouts. This project developed and field-tested two methods of Adaptive Protection systems utilizing synchrophasor data. One method detects conditions of system stress that can lead to unintended relay operation, and initiates a supervisory signal to modify relay response in real time to avoid false trips. The second method detects the possibility of false trips of impedance relays as stable system swings “encroach” on the relays’ impedance zones, and produces an early warning so that relay engineers can re-evaluate relay settings. In addition, real-time synchrophasor data produced by this project was used to develop advanced visualization techniques for display of synchrophasor data to utility operators and engineers.
Computer assisted axial tomography (Emi scan) in neurologic investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, J.K.; Baker, H.L.; Laws, E.R. Jr.
1974-01-01
Cerebral angiography, pneumoencephalography, and radioisotope brain scan with their differing diagnostic abilities have provided the neurologist and neurosurgeon with extremely valuable diagnostic techniques. It is doubtful, however, if any of these now conventional methods had the enormous impact on the practice of neurology that computer assisted axial tomography (C.A.T.) is beginning to have. Here, for the first time, is a test which, without significant risk or discomfort, can demonstrate some normal intracranial (and intraorbital) structures, can demonstrate intracranial pathology and, in many cases, can make the potentially dangerous contrast studies unnecessary. After only a few months' experience with C.A.T. in a clinical setting, it has become obvious that there will have to be a reappraisal of the accepted investigative work-up of many conditions. While it is too early to have formulated strict criteria for the use of C.A.T., this paper is an attempt to show how the technique is proving useful in the investigation of many conditions.
Preparing for in situ processing on upcoming leading-edge supercomputers
Kress, James; Churchill, Randy Michael; Klasky, Scott; ...
2016-10-01
High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large scale fusion simulation code succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.
Multimedia content analysis, management and retrieval: trends and challenges
NASA Astrophysics Data System (ADS)
Hanjalic, Alan; Sebe, Nicu; Chang, Edward
2006-01-01
Recent advances in computing, communications and storage technology have made multimedia data become prevalent. Multimedia has gained enormous potential in improving the processes in a wide range of fields, such as advertising and marketing, education and training, entertainment, medicine, surveillance, wearable computing, biometrics, and remote sensing. Rich content of multimedia data, built through the synergies of the information contained in different modalities, calls for new and innovative methods for modeling, processing, mining, organizing, and indexing of this data for effective and efficient searching, retrieval, delivery, management and sharing of multimedia content, as required by the applications in the abovementioned fields. The objective of this paper is to present our views on the trends that should be followed when developing such methods, to elaborate on the related research challenges, and to introduce the new conference, Multimedia Content Analysis, Management and Retrieval, as a premium venue for presenting and discussing these methods with the scientific community. Starting from 2006, the conference will be held annually as a part of the IS&T/SPIE Electronic Imaging event.
GPU accelerated manifold correction method for spinning compact binaries
NASA Astrophysics Data System (ADS)
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
The graphics processing unit (GPU) acceleration of the manifold correction algorithm based on the compute unified device architecture (CUDA) technology is designed to simulate the dynamic evolution of the Post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and the efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same codes executed on the central processing unit (CPU) alone. The acceleration achieved when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times that of the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
WARP: Weight Associative Rule Processor. A dedicated VLSI fuzzy logic megacell
NASA Technical Reports Server (NTRS)
Pagni, A.; Poluzzi, R.; Rizzotto, G. G.
1992-01-01
During the last five years Fuzzy Logic has gained enormous popularity in the academic and industrial worlds. The success of this new methodology has led the microelectronics industry to create a new class of machines, called Fuzzy Machines, to overcome the limitations of traditional computing systems when utilized as Fuzzy Systems. This paper gives an overview of the methods by which Fuzzy Logic data structures are represented in the machines (each with its own advantages and inefficiencies). Next, the paper introduces WARP (Weight Associative Rule Processor) which is a dedicated VLSI megacell allowing the realization of a fuzzy controller suitable for a wide range of applications. WARP represents an innovative approach to VLSI Fuzzy controllers by utilizing different types of data structures for characterizing the membership functions during the various stages of the Fuzzy processing. WARP dedicated architecture has been designed in order to achieve high performance by exploiting the computational advantages offered by the different data representations.
Adapting bioinformatics curricula for big data.
Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H
2016-01-01
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.
Control Theory based Shape Design for the Incompressible Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Cowles, G.; Martinelli, L.
2003-12-01
A design method for shape optimization in incompressible turbulent viscous flow has been developed and validated for inverse design. The gradient information is determined using a control theory based algorithm. With such an approach, the cost of computing the gradient is negligible. An additional adjoint system must be solved which requires the cost of a single steady state flow solution. Thus, this method has an enormous advantage over traditional finite-difference based algorithms. The method of artificial compressibility is utilized to solve both the flow and adjoint systems. An algebraic turbulence model is used to compute the eddy viscosity. The method is validated using several inverse wing design test cases. In each case, the program must modify the shape of the initial wing such that its pressure distribution matches that of the target wing. Results are shown for the inversion of both finite thickness wings as well as zero thickness wings which can be considered a model of yacht sails.
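The sketch below illustrates the adjoint idea on a linear model problem: for a cost J(u, a) with state constraint A u = f(a), one adjoint solve yields the gradient with respect to all design variables at roughly the cost of one extra state solve. The matrices and quadratic cost are illustrative stand-ins for the incompressible flow equations and pressure-matching cost described above, not the authors' formulation.

import numpy as np

# State equation A u = f(a); the design a enters the right-hand side linearly: f(a) = B a.
# Cost J = 0.5 * ||u - u_target||^2 (stand-in for matching a target pressure distribution).
rng = np.random.default_rng(0)
n, m = 50, 8
A = rng.standard_normal((n, n)) + 10 * np.eye(n)
B = rng.standard_normal((n, m))
u_target = rng.standard_normal(n)

def J(a):
    u = np.linalg.solve(A, B @ a)
    return 0.5 * np.dot(u - u_target, u - u_target)

def gradient_via_adjoint(a):
    u = np.linalg.solve(A, B @ a)             # one state solve
    lam = np.linalg.solve(A.T, u - u_target)  # one adjoint solve: A^T lambda = dJ/du
    return B.T @ lam                          # dJ/da = (df/da)^T lambda, independent of m

# Check against a one-sided finite difference in the first design variable.
a0 = np.zeros(m)
g = gradient_via_adjoint(a0)
eps = 1e-6
fd = (J(a0 + eps * np.eye(m)[0]) - J(a0)) / eps
print(g[0], fd)  # the two values should agree to several digits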
Integrated risk/cost planning models for the US Air Traffic system
NASA Technical Reports Server (NTRS)
Mulvey, J. M.; Zenios, S. A.
1985-01-01
A prototype network planning model for the U.S. Air Traffic control system is described. The model encompasses the dual objectives of managing collision risks and transportation costs where traffic flows can be related to these objectives. The underlying structure is a network graph with nonseparable convex costs; the model is solved efficiently by capitalizing on its intrinsic characteristics. Two specialized algorithms for solving the resulting problems are described: (1) truncated Newton, and (2) simplicial decomposition. The feasibility of the approach is demonstrated using data collected from a control center in the Midwest. Computational results with different computer systems are presented, including a vector supercomputer (CRAY-XMP). The risk/cost model has two primary uses: (1) as a strategic planning tool using aggregate flight information, and (2) as an integrated operational system for forecasting congestion and monitoring (controlling) flow throughout the U.S. In the latter case, access to a supercomputer is required due to the model's enormous size.
Adapting bioinformatics curricula for big data
Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.
2016-01-01
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469
Emissive flat panel displays: A challenge to the AMLCD
NASA Astrophysics Data System (ADS)
Walko, R. J.
According to some sources, flat panel displays (FPD's) for computers will represent a 20-40 billion dollar industry by the end of the decade and could leverage up to 100-200 billion dollars in computer sales. Control of the flat panel display industry could be a significant factor in the global economy if FPD's manage to tap into the enormous audio/visual consumer market. Japan presently leads the world in active matrix liquid crystal display (AMLCD) manufacturing, the current leading FPD technology. The AMLCD is basically a light shutter which does not emit light on its own, but modulates the intensity of a separate backlight. However, other technologies, based on light emitting phosphors, could eventually challenge the AMLCD's lead position. These light-emissive technologies do not have the size, temperature and viewing angle limitations of AMLCD's. In addition, they could also be less expensive to manufacture, and require a smaller capital outlay for a manufacturing plant. An overview of these alternative technologies is presented.
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in design and analysis of DET and MPS spacecraft power system performance in order to determine energy balance of subsystem. DET spacecraft power system feeds output of solar photovoltaic array and nickel cadmium batteries directly to spacecraft bus. In MPS system, Standard Power Regulator Unit (SPRU) is utilized to operate array at array's peak power point. DET and MPS perform minute-by-minute simulation of performance of power system. Results of simulation focus mainly on output of solar array and characteristics of batteries. Both packages are limited in terms of orbital mechanics, but they have sufficient capability to calculate data on eclipses and performance of arrays for circular or near-circular orbits. DET and MPS written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; GSC-13444, for Apple Macintosh computers.
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any predicted 3-D computational fluid dynamic modeling during construction and as time goes by, and the efficiency and effectiveness of the actual cooling rapidly departs even farther from predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced which reduces costs and also improves reliability.
Computational Nanotechnology at NASA Ames Research Center, 1996
NASA Technical Reports Server (NTRS)
Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamonoid mechanosynthetic pathways. (5) Studies of the laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996 held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at CalTech, Ralph Merkle at Xerox Parc, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.
Computer memory power control for the Galileo spacecraft
NASA Technical Reports Server (NTRS)
Detwiler, R. C.
1983-01-01
The developmental history, major design drives, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery of other large energy storage elements usually associated with uninterrupted power supply designs.
Takahashi, Kouhei; Kanno, Tsutomu; Sakai, Akihiro; Tamaki, Hiromasa; Kusada, Hideo; Yamada, Yuka
2013-01-01
Enormously large amount of heat produced by human activities is now mostly wasted into the environment without use. To realize a sustainable society, it is important to develop practical solutions for waste heat recovery. Here, we demonstrate that a tubular thermoelectric device made of tilted multilayer of Bi(0.5)Sb(1.5)Te3/Ni provides a promising solution. The Bi(0.5)Sb(1.5)Te3/Ni tube allows tightly sealed fluid flow inside itself, and operates in analogy with the standard shell and tube heat exchanger. We show that it achieves perfect balance between efficient heat exchange and high-power generation with a heat transfer coefficient of 4.0 kW/m(2)K and a volume power density of 10 kW/m(3) using low-grade heat sources below 100°C. The Bi(0.5)Sb(1.5)Te3/Ni tube thus serves as a power generator and a heat exchanger within a single unit, which is advantageous for developing new cogeneration systems in factories, vessels, and automobiles where cooling of excess heat is routinely carried out.
Kochak, Gregory M; Mangat, Surinder
2002-12-23
Despite an enormous body of research investigating the mass transfer of D-glucose through biological membranes, carrier-mediated and first-order models have remained the prevalent models describing glucose's quantitative behavior even though they have proven to be inadequate over extended concentration ranges. Recent evidence from GLUT2 knockout studies further questions our understanding of molecular models, especially those employing Michaelis-Menten (MM)-type kinetic models. In this report, evidence is provided that D-glucose is absorbed by rat intestinal epithelium by a combination of convective ultrafiltration and nonlinear diffusion. The diffusive component of mass transfer is described by a concentration-dependent permeability coefficient, modeled as a fractal power function. Glucose and sodium chloride-dependent-induced aqueous convection currents are the result of prevailing oncotic and osmotic pressure effects, and a direct effect of glucose and sodium chloride on intestinal epithelium resulting in enhanced glucose, sodium ion, and water mobility. The fractal power model of glucose diffusion was superior to the conventional MM description. A convection-diffusion model of mass transfer adequately characterized glucose mass transfer over a 105-fold glucose concentration range in the presence and absence of sodium ion.
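To make the contrast concrete, the sketch below compares a Michaelis-Menten (carrier-mediated) flux with a flux built from a concentration-dependent permeability of power-law (fractal-like) form plus a convective term. The functional forms and all parameter values are illustrative assumptions only and are not the fitted model of the study.

import numpy as np

def flux_mm(C, Jmax=10.0, Km=5.0):
    """Carrier-mediated (Michaelis-Menten) flux, saturating at Jmax (assumed constants)."""
    return Jmax * C / (Km + C)

def flux_power_law(C, k=1.2, alpha=0.7, v=0.05):
    """Diffusive flux with an assumed power-law concentration-dependent permeability
    P(C) = k * C**(alpha - 1) acting on the concentration (sink condition assumed),
    plus a convective (ultrafiltration) term v * C."""
    return k * C**alpha + v * C

for C in (0.1, 1.0, 10.0, 100.0, 1000.0):  # wide concentration range
    print(f"C = {C:8.1f}   MM flux = {flux_mm(C):8.2f}   power-law flux = {flux_power_law(C):9.2f}")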
Takahashi, Kouhei; Kanno, Tsutomu; Sakai, Akihiro; Tamaki, Hiromasa; Kusada, Hideo; Yamada, Yuka
2013-01-01
Enormously large amount of heat produced by human activities is now mostly wasted into the environment without use. To realize a sustainable society, it is important to develop practical solutions for waste heat recovery. Here, we demonstrate that a tubular thermoelectric device made of tilted multilayer of Bi0.5Sb1.5Te3/Ni provides a promising solution. The Bi0.5Sb1.5Te3/Ni tube allows tightly sealed fluid flow inside itself, and operates in analogy with the standard shell and tube heat exchanger. We show that it achieves perfect balance between efficient heat exchange and high-power generation with a heat transfer coefficient of 4.0 kW/m2K and a volume power density of 10 kW/m3 using low-grade heat sources below 100°C. The Bi0.5Sb1.5Te3/Ni tube thus serves as a power generator and a heat exchanger within a single unit, which is advantageous for developing new cogeneration systems in factories, vessels, and automobiles where cooling of excess heat is routinely carried out. PMID:23511347
Grid Computing in K-12 Schools. Soapbox Digest. Volume 3, Number 2, Fall 2004
ERIC Educational Resources Information Center
AEL, 2004
2004-01-01
Grid computing allows large groups of computers (either in a lab, or remote and connected only by the Internet) to extend extra processing power to each individual computer to work on components of a complex request. Grid middleware, recognizing priorities set by systems administrators, allows the grid to identify and use this power without…
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
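As a hedged illustration of the discretization idea (not the paper's NPHC-based algorithm, and without the bound-tightening and grid-pruning steps), the sketch below sweeps a discretized voltage-magnitude constraint on a hypothetical two-bus system, evaluates the power flow equations at each grid point, and keeps the injections that satisfy the remaining inequality constraints; all network parameters and limits are assumed.

```python
import numpy as np

# toy 2-bus system: slack bus 1 and a generator bus 2, line impedance assumed
y = 1.0 / (0.01 + 0.1j)                    # series line admittance, p.u.
V1 = 1.0 + 0j                              # slack bus voltage
v2_grid = np.linspace(0.95, 1.05, 21)      # discretized voltage-magnitude constraint
th_grid = np.linspace(-np.pi / 3, np.pi / 3, 181)

feasible = []
for v2 in v2_grid:
    for th in th_grid:
        V2 = v2 * np.exp(1j * th)
        S2 = V2 * np.conj(y * (V2 - V1))   # injection implied by the power flow equations
        P2, Q2 = S2.real, S2.imag
        # remaining (assumed) inequality constraints: reactive limits and line current
        if -0.5 <= Q2 <= 0.5 and abs(y * (V2 - V1)) <= 5.0:
            feasible.append((P2, Q2))

feasible = np.array(feasible)              # discretized picture of the feasible injections
print(feasible.shape)
```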
Computing the Feasible Spaces of Optimal Power Flow Problems
Molzahn, Daniel K.
2017-03-15
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (Power-FLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.
Smart Collaborative Caching for Information-Centric IoT in Fog Computing.
Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke
2017-11-01
The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs further evolutionary reform. Limited by an inflexible design philosophy, the traditional network structure is hard-pressed to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating, and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability, and other parameters are also considered. The analytical results demonstrate that our scheme outperforms the original ones in terms of total packet number and average transmission latency. We expect that the SCC will contribute an efficient solution to the related studies.
Smart Collaborative Caching for Information-Centric IoT in Fog Computing
Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Zhang, Hong-Ke
2017-01-01
The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs further evolutionary reform. Limited by an inflexible design philosophy, the traditional network structure is hard-pressed to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be utilized in resource pooling, content storing, node locating, and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability, and other parameters are also considered. The analytical results demonstrate that our scheme outperforms the original ones in terms of total packet number and average transmission latency. We expect that the SCC will contribute an efficient solution to the related studies. PMID:29104219
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
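To make the exact-versus-approximate distinction concrete outside the QTL setting, here is a minimal sketch for ordinary least squares: the exact case deletion (ECD analogue) refits the model without each observation, while the influence-function shortcut (EIF analogue) estimates the same parameter shift from a single fit. The data, model, and planted outlier are hypothetical; the MAPMAKER/SIBS likelihood itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y[0] += 5.0                                    # plant one influential outlier

XtX_inv = np.linalg.inv(X.T @ X)
beta_full = XtX_inv @ X.T @ y
resid = y - X @ beta_full

# exact case deletion: refit the model n times, once per left-out observation
delta_ecd = np.array([beta_full - np.linalg.lstsq(np.delete(X, i, 0),
                                                  np.delete(y, i), rcond=None)[0]
                      for i in range(n)])

# influence-function approximation: one matrix product, no refitting
delta_eif = (XtX_inv @ (X * resid[:, None]).T).T

print(np.abs(delta_ecd - delta_eif).max())      # small: the shortcut tracks the exact shift
print(np.abs(delta_eif).sum(axis=1).argmax())   # flags observation 0, the planted outlier
```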
Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Cheng, Jiahao
2018-02-01
Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have recently been developed for studying the complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast-propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
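The sketch below illustrates the subcycling idea on a deliberately simple pair of coupled scalar ODEs standing in for a fast (twinned) and a slow (untwinned) sub-domain: the fast variable is advanced with many fine steps per coarse step while the interface value is first frozen (predictor) and then replaced by an averaged value (corrector). The rate constants and step sizes are arbitrary; this is not the CPFE implementation from the paper.

```python
import numpy as np

def subcycled_step(u, v, dt, m, k_fast=50.0, k_slow=1.0):
    """One coarse step of size dt: the 'fast' sub-domain u is sub-cycled with m
    fine steps while the 'slow' sub-domain v is frozen (predictor), then v is
    advanced and u is corrected using an averaged interface value."""
    # predictor: sub-cycle u with v held at its old value
    u_pred = u
    for _ in range(m):
        u_pred += (dt / m) * (-k_fast * (u_pred - v))
    # advance the slow sub-domain with the predicted interface value
    v_new = v + dt * (-k_slow * (v - u_pred))
    # corrector: repeat the sub-cycling against the averaged interface value
    u_corr = u
    v_bar = 0.5 * (v + v_new)
    for _ in range(m):
        u_corr += (dt / m) * (-k_fast * (u_corr - v_bar))
    return u_corr, v_new

u, v = 1.0, 0.0
for _ in range(100):                 # coarse time loop
    u, v = subcycled_step(u, v, dt=0.02, m=20)
print(u, v)                          # both relax toward a common value
```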
Virtual microscopy and digital pathology in training and education.
Hamilton, Peter W; Wang, Yinhai; McCullough, Stephen J
2012-04-01
Traditionally, education and training in pathology has been delivered using textbooks, glass slides and conventional microscopy. Over the last two decades, the number of web-based pathology resources has expanded dramatically with centralized pathological resources being delivered to many students simultaneously. Recently, whole slide imaging technology allows glass slides to be scanned and viewed on a computer screen via dedicated software. This technology is referred to as virtual microscopy and has created enormous opportunities in pathological training and education. Students are able to learn key histopathological skills, e.g. to identify areas of diagnostic relevance from an entire slide, via a web-based computer environment. Students no longer need to be in the same room as the slides. New human-computer interfaces are also being developed using more natural touch technology to enhance the manipulation of digitized slides. Several major initiatives are also underway introducing online competency and diagnostic decision analysis using virtual microscopy and have important future roles in accreditation and recertification. Finally, researchers are investigating how pathological decision-making is achieved using virtual microscopy and modern eye-tracking devices. Virtual microscopy and digital pathology will continue to improve how pathology training and education is delivered. © 2012 The Authors APMIS © 2012 APMIS.
Model selection for the North American Breeding Bird Survey: A comparison of methods
Link, William; Sauer, John; Niven, Daniel
2017-01-01
The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
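For readers unfamiliar with the quantities being compared, the snippet below computes WAIC from a matrix of pointwise posterior log-likelihoods in the usual way (log pointwise predictive density minus an effective-parameter penalty); it assumes posterior draws are already available and is not tied to the BBS hierarchical models.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """WAIC on the deviance scale from an (S, N) matrix of pointwise
    log-likelihoods evaluated at S posterior draws for N observations."""
    S = log_lik.shape[0]
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))  # log pointwise predictive density
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))       # effective number of parameters
    return -2.0 * (lppd - p_waic)

# toy example: normal model with posterior draws of (mu, sigma), all numbers made up
rng = np.random.default_rng(1)
y = rng.normal(0.3, 1.0, size=40)
mu = rng.normal(0.3, 0.1, size=(500, 1))
sigma = np.abs(rng.normal(1.0, 0.05, size=(500, 1)))
log_lik = -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2)
print(waic(log_lik))
```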
Planning/scheduling techniques for VQ-based image compression
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.
1994-01-01
The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of the data holdings of NASA are in the form of images which will be accessed by users across the computer networks. Accessing the image data in its full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application since the decompression of VQ-compressed images is a table lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data needs expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ. It involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
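To make the "decompression is a table lookup" point concrete, here is a minimal vector-quantization encode/decode sketch; the codebook is random for illustration (in practice it would be trained, e.g., with a Lloyd/LBG-style procedure, which is the expert tuning the abstract alludes to), and the block size and codebook size are assumed.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each image block (row vector) to the index of its nearest codeword."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).astype(np.uint16)

def vq_decode(indices, codebook):
    """Decompression is a pure table lookup into the codebook."""
    return codebook[indices]

rng = np.random.default_rng(1)
blocks = rng.random((1000, 16))          # 4x4 image blocks flattened to 16-vectors (assumed)
codebook = rng.random((256, 16))         # 256-entry codebook, random stand-in for a trained one
idx = vq_encode(blocks, codebook)        # the compressed representation: one index per block
recon = vq_decode(idx, codebook)         # browse-quality reconstruction
print(idx.nbytes / blocks.nbytes)        # rough compression ratio for these assumptions
```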
Simulation and Experimental Study of Metal Organic Frameworks Used in Adsorption Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenks, Jeromy J.; Motkuri, Radha K.; TeGrotenhuis, Ward
2016-10-11
Metal-organic frameworks (MOFs) have attracted enormous interest over the past few years for energy storage and gas separation, yet there have been few reports on adsorption cooling applications. Adsorption cooling technology is an established alternative to mechanical vapor compression refrigeration systems and is an excellent alternative in industrial environments where waste heat is available. We explored the use of MOFs that have very high mass loading and relatively low heats of adsorption, with certain combinations of refrigerants, to demonstrate a new type of highly efficient adsorption chiller. Computational fluid dynamics combined with a system-level lumped-parameter model have been used to project size and performance for chillers with a cooling capacity ranging from a few kW to several thousand kW. These systems rely on stacked micro/mini-scale architectures to enhance heat and mass transfer. Recent computational studies of an adsorption chiller based on MOFs suggest that a thermally-driven coefficient of performance greater than one may be possible, which would represent a fundamental breakthrough in the performance of adsorption chiller technology. Presented herein are computational and experimental results for hydrophilic and fluorophilic MOFs.
Enhanced Graphics for Extended Scale Range
NASA Technical Reports Server (NTRS)
Hanson, Andrew J.; Chi-Wing Fu, Philip
2012-01-01
Enhanced Graphics for Extended Scale Range is a computer program for rendering fly-through views of scene models that include visible objects differing in size by large orders of magnitude. An example would be a scene showing a person in a park at night with the moon, stars, and galaxies in the background sky. Prior graphical computer programs exhibit arithmetic and other anomalies when rendering scenes containing objects that differ enormously in scale and distance from the viewer. The present program dynamically repartitions distance scales of objects in a scene during rendering to eliminate almost all such anomalies in a way compatible with implementation in other software and in hardware accelerators. By assigning depth ranges corresponding to rendering precision requirements, either automatically or under program control, this program spaces out object scales to match the precision requirements of the rendering arithmetic. This action includes an intelligent partition of the depth buffer ranges to avoid known anomalies from this source. The program is written in C++, using the OpenGL, GLUT, and GLUI standard libraries, and nVidia GeForce Vertex Shader extensions. The program has been shown to work on several computers running UNIX and Windows operating systems.
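A toy version of the depth-repartitioning idea is sketched below: object distances are grouped into logarithmic zones, each of which would be rendered with its own near/far planes so that depth-buffer precision is not stretched across the whole scale range. The zone width and the distances are made up, and this is not the program's actual partitioning logic.

```python
import numpy as np

def assign_depth_zones(distances, decades_per_zone=4.0):
    """Partition object distances (in scene units) into logarithmic depth zones.
    Each zone spans a fixed number of decades and would be rendered back-to-front
    with its own near/far planes, so depth precision is spent where needed."""
    zones = np.floor(np.log10(distances) / decades_per_zone).astype(int)
    bounds = {z: (10.0 ** (z * decades_per_zone), 10.0 ** ((z + 1) * decades_per_zone))
              for z in np.unique(zones)}
    return zones, bounds

# objects ranging from a 1 m person to galaxies ~1e21 m away (illustrative distances)
d = np.array([1.0, 50.0, 4e8, 1.5e11, 9e15, 2e21])
zones, bounds = assign_depth_zones(d)
print(zones)        # e.g. [0 0 2 2 3 5]
print(bounds)       # near/far planes per zone
```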
Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations
2007-08-31
very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost... configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This... due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously in recent years, with larger media capacity, more memory, and more computational power. Visual computing with high-performance graphic interfaces and desktop computational power has changed the way engineers accomplish everyday tasks, development work, and safety study analyses. The emergence of parallel computing will permit simulation over a larger domain. In addition, new development methods, languages and tools have appeared in the last several years.
2011-09-01
15V power supply for the IMU; switching 5, 12V ATX power supply for the computer and hard drive; an L1/L2 active antenna on a small back plane; USB to serial... (Figure 4. UAS Target Location Technology for Ground Based Observers (TLGBO)...)
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-12-04
The software serves two purposes. The first purpose of the software is to prototype the Sandia High Performance Computing Power Application Programming Interface Specification effort. The specification can be found at http://powerapi.sandia.gov. Prototypes of the specification were developed in parallel with the development of the specification. Release of the prototype will be instructive to anyone who intends to implement the specification. More specifically, our vendor collaborators will benefit from the availability of the prototype. The second is in direct support of the PowerInsight power measurement device, which was co-developed with Penguin Computing. The software provides a cluster-wide measurement capability enabled by the PowerInsight device. The software can be used by anyone who purchases a PowerInsight device. The software will allow the user to easily collect power and energy information of a node that is instrumented with PowerInsight. The software can also be used as an example prototype implementation of the High Performance Computing Power Application Programming Interface Specification.
Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...
2018-03-22
The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haidar, Azzam; Jagode, Heike; Vaccaro, Phil
The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
Computer modeling and simulators as part of university training for NPP operating personnel
NASA Astrophysics Data System (ADS)
Volman, M.
2017-01-01
This paper considers aspects of a program for training future nuclear power plant personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling is used for numerical experiments on the kinetics of nuclear reactors in Mathcad. Simulation modeling is carried out on computer-based and full-scale simulators of a water-cooled power reactor to reproduce neutron-physical reactor measurements and the start-up and shutdown processes.
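As an example of the kind of reactor-kinetics numerical experiment mentioned above (shown here in Python rather than Mathcad, with illustrative one-delayed-group parameters that are not taken from the course materials), the sketch below integrates the point-kinetics equations for a small positive reactivity step.

```python
import numpy as np
from scipy.integrate import solve_ivp

# one-delayed-group point kinetics; parameter values are illustrative only
beta, Lam, lam = 0.0065, 1e-4, 0.08   # delayed fraction, generation time (s), precursor decay (1/s)
rho = 0.2 * beta                      # small positive reactivity step (assumed)

def kinetics(t, y):
    n, C = y                          # relative power and precursor concentration
    return [(rho - beta) / Lam * n + lam * C,
            beta / Lam * n - lam * C]

y0 = [1.0, beta / (Lam * lam)]        # steady state at n = 1 before the step
sol = solve_ivp(kinetics, (0.0, 20.0), y0, max_step=0.01)
print(sol.y[0, -1])                   # relative power 20 s after the reactivity insertion
```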
The Experimental Mathematician: The Pleasure of Discovery and the Role of Proof
ERIC Educational Resources Information Center
Borwein, Jonathan M.
2005-01-01
The emergence of powerful mathematical computing environments, the growing availability of correspondingly powerful (multi-processor) computers, and the pervasive presence of the Internet allow mathematicians, students, and teachers to proceed heuristically and "quasi-inductively." We may increasingly use symbolic and numeric computation,…
High-power beam combining: a step to a future laser weapon system
NASA Astrophysics Data System (ADS)
Protz, Rudolf; Zoz, Jürgen; Geidek, Franz; Dietrich, Stephan; Fall, Michael
2012-11-01
Due to the enormous progress in the field of high-power fiber lasers during the last years, commercial industrial fiber lasers are now available which deliver a near-diffraction-limited beam with power levels up to 10 kW. For the realization of a future laser weapon system, which can be used for Counter-RAM or similar air defence applications, a laser source with a beam power at the level of 100 kW or more is required. At MBDA Germany the concept for a high-energy laser weapon system is investigated, which is based on such existing industrial laser sources as mentioned before. A number of individual high-power fiber laser beams are combined together, using one common beam director telescope. By this “geometric” beam coupling scheme, sufficient laser beam power for an operational laser weapon system can be achieved. The individual beams from the different lasers are steered by servo-loops, using fast tip-tilt mirrors. This principle enables the concentration of the total laser beam power at a common focal point on a distant target, also allowing fine tracking of target movements and first-order compensation of turbulence effects on laser beam propagation. The proposed beam combination concept was demonstrated using several experimental set-ups. Different experiments were performed to investigate laser beam-target interaction and target fine tracking, also at large distances. The content and results of these investigations are reported. An example for the lay-out of an Air Defence High Energy Laser Weapon (ADHELW) is given. It can be concluded that geometric high-power beam combining is an important step towards the realization of a laser weapon system in the near future.
Software Support for Transiently Powered Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Der Woude, Joel Matthew
With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which, to the authors' knowledge, is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.
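Ratchet itself inserts checkpoints at compile time; the file-based Python analogue below only illustrates the general idea of extending computation across power cycles by periodically persisting state and resuming from the last checkpoint after a (simulated) failure. The file name, checkpoint interval, and failure model are all hypothetical.

```python
import json, os, random

CKPT = "state.json"                     # stands in for nonvolatile memory

def load():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"i": 0, "acc": 0}

def save(state):
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CKPT)               # atomic rename so a failure never corrupts the checkpoint

state = load()                          # resume from the last checkpoint after a power loss
while state["i"] < 1_000_000:
    state["acc"] += state["i"]
    state["i"] += 1
    if state["i"] % 10_000 == 0:
        save(state)                     # periodic checkpoint
        if random.random() < 0.001:     # crude stand-in for an unpredictable power failure
            raise SystemExit("simulated power failure; re-run to resume")
print(state["acc"])
```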
NASA Technical Reports Server (NTRS)
Goltz, G.; Kaiser, L. M.; Weiner, H.
1977-01-01
A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.
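The performance-analysis half of such a program boils down to an energy-balance simulation; the toy hourly sketch below (with made-up panel, load, and battery numbers, not the DSPA models) shows the basic bookkeeping of generation, load, and battery state of charge.

```python
import numpy as np

# hourly energy balance for a solar-array/battery system; all numbers illustrative
hours = np.arange(24 * 30)                        # one month of hourly steps
solar_w = 40.0 * np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0.0, None)
load_w = np.full(hours.shape, 8.0)                # constant 8 W navigational-aid load (assumed)

cap_wh, soc_wh, charge_eff = 600.0, 600.0, 0.9    # battery capacity, state of charge, efficiency
min_soc = cap_wh
for gen, load in zip(solar_w, load_w):
    net_wh = gen - load                           # net energy over one hour
    soc_wh += charge_eff * net_wh if net_wh > 0 else net_wh
    soc_wh = min(max(soc_wh, 0.0), cap_wh)        # battery cannot over- or under-charge
    min_soc = min(min_soc, soc_wh)

print(f"lowest state of charge over the month: {min_soc:.0f} Wh")
```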
Organization of the secure distributed computing based on multi-agent system
NASA Astrophysics Data System (ADS)
Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera
2018-04-01
Nowadays, the development of methods for distributed computing receives much attention. One such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can be exposed to security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for controlling the operation of computing network nodes. Networked PCs are used as computing nodes. The proposed multi-agent control system makes it possible to quickly harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. Agents deployed on a computer network can configure the distributed computing system, distribute the computational load among the agent-operated computers, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers can be increased by connecting new machines to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces the problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (a dynamically changing number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects erroneous results.
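The paper's agent algorithm is not reproduced here, but the small sketch below shows one common way to distribute independent tasks in proportion to each node's measured computing power: a greedy rule that always hands the next-largest task to the node that would finish it earliest. Task costs and node speeds are hypothetical.

```python
import heapq

def distribute(task_costs, node_power):
    """Greedy assignment: largest tasks first, each to the node with the
    earliest expected finish time given its relative computing power."""
    heap = [(0.0, name) for name in node_power]          # (current load in seconds, node)
    heapq.heapify(heap)
    assignment = {name: [] for name in node_power}
    for i, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, name = heapq.heappop(heap)
        assignment[name].append(i)
        heapq.heappush(heap, (load + cost / node_power[name], name))
    return assignment

# hypothetical task costs (arbitrary units) and relative node speeds
print(distribute([4, 3, 3, 2, 2, 1], {"pc1": 1.0, "pc2": 2.0, "pc3": 0.5}))
```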
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
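Given time-stamped power samples of the kind such instrumentation produces, attributing energy to an individual kernel reduces to integrating power over that kernel's execution window; the sketch below does this with synthetic samples (the sampling rate, idle power, and kernel window are all assumed, not measurements from the paper).

```python
import numpy as np

def kernel_energy(timestamps_s, power_w, t_start, t_end):
    """Integrate sampled power over a kernel's execution window to get the
    energy attributed to that kernel (trapezoidal rule over the samples)."""
    mask = (timestamps_s >= t_start) & (timestamps_s <= t_end)
    return np.trapz(power_w[mask], timestamps_s[mask])

# hypothetical 1 kHz power samples around a GPU kernel running from t=2.0 s to t=2.8 s
t = np.arange(0.0, 5.0, 1e-3)
p = 35.0 + 120.0 * ((t > 2.0) & (t < 2.8))       # idle power plus kernel load, in watts
e_joules = kernel_energy(t, p, 2.0, 2.8)
print(e_joules, e_joules / 0.8)                  # energy (J) and average power (W) for the kernel
```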
NASA Technical Reports Server (NTRS)
1974-01-01
The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1975-01-01
Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
Small Universal Bacteria and Plasmid Computing Systems.
Wang, Xun; Zheng, Pan; Ma, Tongmao; Song, Tao
2018-05-29
Bacterial computing is a known candidate in natural computing, the aim being to construct "bacterial computers" for solving complex problems. In this paper, a new kind of bacterial computing system, named the bacteria and plasmid computing system (BP system), is proposed. We investigate the computational power of BP systems with finite numbers of bacteria and plasmids. Specifically, it is obtained in a constructive way that a BP system with 2 bacteria and 34 plasmids is Turing universal. The results provide a theoretical cornerstone to construct powerful bacterial computers and demonstrate a concept of paradigms using a "reasonable" number of bacteria and plasmids for such devices.
Maximizing Computational Capability with Minimal Power
2009-03-01
Chip-scale energy and power... Optical bench with mounting posts, imager chip, LCD interfaced with the computer, polarizers, XYZ translator, optical slide, VMM computational pixel... Signal routing power/memory: power figures do not include off-chip communication (i.e., accessing memory); Power = 1/2 C Vdd^2 f for CMOS; chip to chip (10 pF load min...)
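The one quantitative relation recoverable from the residue above is the familiar CMOS dynamic switching power estimate (energy per output transition times clock frequency), which can be written as:

```latex
P_{\mathrm{dyn}} \;\approx\; \tfrac{1}{2}\, C\, V_{dd}^{2}\, f
```

where C is the switched capacitance, V_dd the supply voltage, and f the clock frequency; as the residue notes, this excludes off-chip communication such as memory accesses.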
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... Documents Access and Management System (ADAMS): You may access publicly available documents online in the... Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants,'' issued for... Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION: Revision...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...
76 FR 40943 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-12
..., Revision 3, ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants.'' FOR FURTHER..., ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants,'' was issued with a temporary... Fuel Reprocessing Plants,'' to 10 CFR part 50 with regard to the use of computers in safety systems of...
Unity Power Factor Operated PFC Converter Based Power Supply for Computers
NASA Astrophysics Data System (ADS)
Singh, Shikha; Singh, Bhim; Bhuvaneswari, G.; Bist, Vashist
2017-11-01
Power Supplies (PSs) employed in personal computers pollute the single-phase ac mains by drawing distorted current at a substandard Power Factor (PF). The harmonic distortion of the supply current in these personal computers is observed to be 75% to 90%, with a very high Crest Factor (CF), which escalates losses in the distribution system. To find a tangible solution to these issues, a non-isolated PFC converter is employed at the input of the isolated converter; it improves the input power quality while regulating the dc voltage at its output. This output feeds the isolated stage, which yields completely isolated and stiffly regulated multiple output voltages, the prime requirement of a computer PS. The operation of the proposed PS is evaluated under various operating conditions and the results show improved performance, depicting nearly unity PF and low input current harmonics. A prototype of this PS is developed in a laboratory environment and the recorded test results corroborate the power quality improvement observed in the simulation results under various operating conditions.
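The connection between the quoted current distortion and the substandard power factor follows from the standard decomposition of true power factor into a distortion factor and a displacement factor; this is the textbook relation, not a result from the paper, and the numbers below are illustrative, taken from the upper end of the quoted THD range with unity displacement factor.

```latex
\mathrm{PF} \;=\; \frac{\cos\phi_1}{\sqrt{1+\mathrm{THD}^2}},
\qquad \mathrm{THD}=0.9,\ \cos\phi_1 = 1
\;\;\Rightarrow\;\; \mathrm{PF}\approx 0.74 .
```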
The cost of poor quality: an opportunity of enormous proportions.
Hughes, J M
1998-01-01
In all organizations, the state of finance is routinely reported in sublime detail for study and action. And yet, anywhere from 20 to 50 percent of the monies involved in that report are unnecessary and nonproductive, and are never identified as such. These monies, referred to as the Cost of Waste (COW), are the result of actions that have been taken or must be taken because quality is not served--inappropriate actions are being performed or appropriate actions are not being performed right the first time, every time. Proactively determining, reporting, and monitoring the COW brings a degree of objectivity to the quality management process and provides a powerful internal driver for performance improvement. A 10-step Cost of Waste system is proposed.
NASA Astrophysics Data System (ADS)
Lockwood, Timothy A.
Following federal legislative changes in 2006, cogeneration project financings are no longer entitled by law to the benefit of a power purchase agreement underwritten by an investment-grade investor-owned utility. Consequently, this research explored the need for a new market-risk model for future cogeneration and combined heat and power (CHP) project financing. CHP project investment represents a potentially enormous energy efficiency benefit, reducing fossil fuel use by up to 55% compared to traditional energy generation while concurrently eliminating up to 50% of the associated air emissions, including global warming gases. As a supplemental approach to a comprehensive technical analysis, quantitative multivariate modeling was also used to test the statistical validity and reliability of host facility energy demand and CHP supply ratios in predicting the economic performance of CHP project financing. The resulting analytical models, although not statistically reliable at this time, suggest a radically simplified CHP design method for future profitable CHP investments using four easily attainable energy ratios. This design method shows that financially successful CHP adoption occurs when the average system heat-to-power-ratio supply is less than or equal to the average host-convertible-energy ratio, and when the average nominally rated capacity is less than the average host facility load-factor demand. New CHP investments can play a role in solving the world-wide problem of accommodating growing energy demand while preserving our precious and irreplaceable air quality for future generations.
Center for Building Science: Annual report, FY 1986
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cairns, E.J.; Rosenfeld, A.H.
1987-05-01
The Center for Building Science consists of four programs in the Applied Science Division: energy analysis, buildings energy systems, windows and lighting, and indoor environment. It was established to provide an umbrella so that groups in different programs but with similar interests could combine to perform joint research, develop new research areas, share resources, and produce joint publications. As detailed below, potential savings for U.S. society from energy-efficient buildings are enormous. But these savings can only be realized through an expanding federal R&D program that develops expertise in this new area. The Center for Building Science develops efficient new building components, computer models, data and information systems, and trains needed building scientists. 135 refs., 72 figs., 18 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brickell, E.F.; Simmons, G.J.
In the period since 1976, when Diffie and Hellman published the first discussion of two-key cryptography to appear in the open literature, only a handful of two-key cryptoalgorithms have been proposed - two of which are based on the knapsack problem. Consequently there was enormous interest when Shamir announced in early 1982 a cryptanalytic technique that could break many Merkle-Hellman knapsacks. In a rapid sequence of developments, Simmons and Brickell, Adleman, and Lagarias all announced other attacks on knapsack-based cryptosystems that were either computationally much more efficient or else directed at other knapsack schemes such as the Graham-Shamir or iterated systems. This paper analyzes the common features of knapsack-based cryptosystems and presents all of the cryptanalytic attacks made in 1982 from a unified viewpoint.
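For readers who have not seen the scheme the attacks target, here is a toy Merkle-Hellman knapsack in Python (tiny, insecure parameters chosen purely for illustration): the private key is a superincreasing sequence disguised by modular multiplication, encryption is a subset sum over the public sequence, and decryption undoes the multiplication and solves the superincreasing knapsack greedily.

```python
# toy Merkle-Hellman knapsack (the scheme broken by the attacks surveyed above)
w = [2, 7, 11, 21, 42, 89, 180, 354]          # private superincreasing sequence
q = 881                                        # modulus, larger than sum(w)
r = 588                                        # multiplier coprime to q
b = [(r * wi) % q for wi in w]                 # public key

def encrypt(bits):                             # bits: list of 0/1 of length len(w)
    return sum(bi for bit, bi in zip(bits, b) if bit)

def decrypt(c):
    s = (c * pow(r, -1, q)) % q                # undo the modular multiplication
    bits = []
    for wi in reversed(w):                     # greedy solve of the superincreasing knapsack
        bits.append(1 if s >= wi else 0)
        s -= wi if bits[-1] else 0
    return list(reversed(bits))

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
```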
Applications of systems biology towards microbial fuel production.
Gowen, Christopher M; Fong, Stephen S
2011-10-01
Harnessing the immense natural diversity of biological functions for economical production of fuel has enormous potential benefits. Inevitably, however, the native capabilities for any given organism must be modified to increase the productivity or efficiency of a biofuel bioprocess. From a broad perspective, the challenge is to sufficiently understand the details of cellular functionality to be able to prospectively predict and modify the cellular function of a microorganism. Recent advances in experimental and computational systems biology approaches can be used to better understand cellular level function and guide future experiments. With pressure to quickly develop viable, renewable biofuel processes a balance must be maintained between obtaining depth of biological knowledge and applying that knowledge. Copyright © 2011 Elsevier Ltd. All rights reserved.
Bakshi, Mandeep Singh
2014-11-01
Target drug delivery methodology is becoming increasingly important to overcome the shortcomings of the conventional drug delivery absorption method. It improves the action time with uniform distribution and poses minimal side effects, but is usually difficult to design to achieve the desired results. Economically favorable, environmentally friendly, multifunctional, and easy to design, hybrid nanomaterials have demonstrated their enormous potential as target drug delivery vehicles. A combination of both micelles and nanoparticles makes them fine target delivery vehicles in a variety of biological applications where precision is primarily required to achieve the desired results, as in the case of cytotoxicity of cancer cells, chemotherapy, and computed tomography guided radiation therapy. Copyright © 2014 Elsevier B.V. All rights reserved.
Ebert, M A; Blight, J; Price, S; Haworth, A; Hamilton, C; Cornes, D; Joseph, D J
2004-09-01
Digital data from 3-D treatment planning computers is generally used for patient planning and then never considered again. However, such data contains enormous quantities of information regarding patient geometries, tissue outlining, treatment approaches and dose distributions. Were such data accessible from planning systems from multiple manufacturers, there would be substantial opportunities for undertaking quality assurance of radiotherapy clinical trials, prospective assessment of trial outcomes and basic treatment planning research and development. The technicalities of data exchange between planning systems are outlined, and previous attempts at producing systems capable of viewing and/or manipulating imaging and radiotherapy digital data reviewed. Development of a software system for enhancing the quality of Australasian clinical trials is proposed.
NASA Technical Reports Server (NTRS)
1987-01-01
Remote sensing is the process of acquiring physical information from a distance, obtaining data on Earth features from a satellite or an airplane. Advanced remote sensing instruments detect radiations not visible to the ordinary camera or the human eye in several bands of the spectrum. These data are computer processed to produce multispectral images that can provide enormous amounts of information about Earth objects or phenomena. Since every object on Earth emits or reflects radiation in its own unique signature, remote sensing data can be interpreted to tell the difference between one type of vegetation and another, between densely populated urban areas and lightly populated farmland, between clear and polluted water or in the archeological application between rain forest and hidden man made structures.
Computer optimization of reactor-thermoelectric space power systems
NASA Technical Reports Server (NTRS)
Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.
1973-01-01
A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.
A cognitive computational model inspired by the immune system response.
Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim
2014-01-01
The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that is oscillating between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have been always standing out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of main theoretical immunological perspectives (classic, cognitive, and danger theory), as related to computer science terminologies. The paper presents a descriptive model of immune system, to figure out the nature of response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by the natural biology. To that end, this paper highlights the ISR phases as applied to a case study on hepatitis C virus, meanwhile illustrating our proposed architecture perspective.
A Cognitive Computational Model Inspired by the Immune System Response
Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim
2014-01-01
The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that is oscillating between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have been always standing out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of main theoretical immunological perspectives (classic, cognitive, and danger theory), as related to computer science terminologies. The paper presents a descriptive model of immune system, to figure out the nature of response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by the natural biology. To that end, this paper highlights the ISR phases as applied to a case study on hepatitis C virus, meanwhile illustrating our proposed architecture perspective. PMID:25003131
New consumer load prototype for electricity theft monitoring
NASA Astrophysics Data System (ADS)
Abdullateef, A. I.; Salami, M. J. E.; Musse, M. A.; Onasanya, M. A.; Alebiosu, M. I.
2013-12-01
Illegal connection, which is a direct connection to the distribution feeder, and tampering with the energy meter have been identified as the major means by which nefarious consumers steal electricity on low-voltage distribution systems. This has contributed enormously to the revenue losses incurred by power and energy providers. A Consumer Load Prototype (CLP) is constructed and proposed in this study in order to understand the best possible pattern through which the stealing process is effected in real-life power consumption. The construction of the consumer load prototype will facilitate real-time simulation and data collection for the monitoring and detection of electricity theft on low-voltage distribution systems. The prototype involves the electrical design and construction of consumer loads with the application of various standard regulations from the Institution of Engineering and Technology (IET), formerly known as the Institution of Electrical Engineers (IEE). The LabVIEW platform was used for data acquisition, and the data show a good representation of the connected loads. The prototype will assist researchers and power utilities, currently facing challenges in getting real-time data for the study and monitoring of electricity theft. The simulation of electricity theft in real time is one of the contributions of this prototype. Similarly, the power and energy community, including students, will appreciate the practical approach which the prototype provides for real-time information rather than the software simulation which has hitherto been used in the study of electricity theft.
Brown, J B; Nakatsui, Masahiko; Okuno, Yasushi
2014-12-01
The cost of pharmaceutical R&D has risen enormously, both worldwide and in Japan. However, Japan faces a particularly difficult situation in that its population is aging rapidly, and the cost of pharmaceutical R&D affects not only the industry but the entire medical system as well. To attempt to reduce costs, the newly launched K supercomputer is available for big data drug discovery and structural simulation-based drug discovery. We have implemented both primary (direct) and secondary (infrastructure, data processing) methods for the two types of drug discovery, custom tailored to maximally use the 88,128 compute nodes/CPUs of K, and evaluated the implementations. We present two types of results. In the first, we executed the virtual screening of nearly 19 billion compound-protein interactions and calculated the accuracy of predictions against publicly available experimental data. In the second investigation, we implemented a very computationally intensive binding free energy algorithm and found that our computed binding free energies were considerably accurate when validated against another type of publicly available experimental data. The common feature of both result types is the scale at which computations were executed. The frameworks presented in this article provide prospectives and applications that, while tuned to the computing resources available in Japan, are equally applicable to any equivalent large-scale infrastructure provided elsewhere. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1995-01-01
The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.
Situation awareness and trust in computer-based procedures in nuclear power plant operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Throneburg, E. B.; Jones, J. M.
2006-07-01
Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)
The Ames Power Monitoring System
NASA Technical Reports Server (NTRS)
Osetinsky, Leonid; Wang, David
2003-01-01
The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and specialpurpose software that collects and stores electrical power data by various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low power factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.
Saving Energy and Money: A Lesson in Computer Power Management
ERIC Educational Resources Information Center
Lazaros, Edward J.; Hua, David
2012-01-01
In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to reduce the financial burden that classroom computers place on the school. They will use mathematics to…
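A back-of-the-envelope estimate of the sort the activity calls for might look like the following sketch; every figure (machine count, wattages, idle hours, electricity rate) is an assumption chosen for illustration.

```python
# Illustrative classroom power-management savings estimate (all figures assumed).
computers          = 30        # machines in the classroom
active_watts       = 120.0     # draw with display on, per machine
sleep_watts        = 5.0       # draw in sleep mode, per machine
idle_hours_per_day = 4.0       # hours per school day the machines sit idle
school_days        = 180
rate_per_kwh       = 0.12      # dollars

kwh_saved = computers * (active_watts - sleep_watts) / 1000.0 * idle_hours_per_day * school_days
print(f"energy saved: {kwh_saved:.0f} kWh/year")
print(f"cost saved:   ${kwh_saved * rate_per_kwh:.2f}/year")
```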
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
A computer program calculates the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL). The program uses an algorithm that employs the fast Fourier transform (FFT) to calculate the PDS of a variable: it first estimates the autocovariance function of the variable and then takes the FFT of the smoothed autocovariance function to obtain the PDS. The fast-Fourier-transform technique conserves computer resources.
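The procedure described, estimating the autocovariance, smoothing it with a lag window, and taking its FFT, is essentially a Blackman-Tukey spectral estimate. The sketch below illustrates it in NumPy; it is not the original ACSL-based program, and the window and normalization choices are illustrative.

```python
# Blackman-Tukey style PDS estimate: autocovariance -> lag window -> FFT.
import numpy as np

def power_density_spectrum(x, dt, max_lag=None):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    max_lag = max_lag or n // 4
    # biased autocovariance estimate for lags 0..max_lag
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    # smooth with a Hann lag window to reduce the variance of the estimate
    acov *= np.hanning(2 * max_lag + 1)[max_lag:]
    # even extension of the autocovariance, then FFT -> one-sided PDS
    ext = np.concatenate([acov, acov[-2:0:-1]])
    pds = np.real(np.fft.rfft(ext)) * dt
    freqs = np.fft.rfftfreq(len(ext), d=dt)
    return freqs, pds

# usage: PDS of a noisy 5 Hz sine sampled at 100 Hz
dt = 0.01
t = np.arange(0, 20, dt)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(len(t))
f, p = power_density_spectrum(x, dt)
print("peak near", f[np.argmax(p)], "Hz")
```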
Computer program for afterheat temperature distribution for mobile nuclear power plant
NASA Technical Reports Server (NTRS)
Parker, W. G.; Vanbibber, L. E.
1972-01-01
The ESATA computer program was developed to analyze thermal-safety aspects of post-impacted mobile nuclear power plants. The program is written in FORTRAN IV and designed for the IBM 7094/7044 direct-coupled system.
NASA Astrophysics Data System (ADS)
Aharonov, Dorit
In the last few years, the theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks that seem intractable for classical computers. This review tells the story of theoretical quantum computation. I have left out the developing topic of experimental realizations of the model and have neglected other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines, and Boolean circuits. In light of these models, I define quantum computers and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies, and finite precision. This question cannot be separated from that of quantum complexity, because any realistic model will inevitably be subject to such inaccuracies. I have tried to put all results in their context, asking what the implications are for other issues in computer science and physics. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation for fundamental physical questions such as the transition from quantum to classical physics.
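Grover's algorithm, one of the algorithms the review explains, is easy to simulate classically on a small state vector. The toy sketch below shows the marked item's probability being amplified in roughly sqrt(N) iterations; the index and register size are arbitrary choices for illustration.

```python
# Toy state-vector simulation of Grover's search: the amplitude of one marked
# item out of N is amplified in about (pi/4) * sqrt(N) oracle calls.
import numpy as np

n_qubits = 6
N = 2 ** n_qubits
marked = 42                               # index the oracle "recognizes"

state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                   # oracle: flip sign of marked amplitude
    state = 2 * state.mean() - state      # diffusion: inversion about the mean

print(f"after {iterations} iterations, P(marked) = {state[marked]**2:.3f}")
```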
A feasibility study on porting the community land model onto accelerators using OpenACC
Wang, Dali; Wu, Wei; Winkler, Frank; ...
2014-01-01
As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), and the Arctic Terrestrial Simulator (ATS)) become more and more complicated, we face enormous challenges in porting those applications onto hybrid computing architectures. OpenACC appears to be a very promising technology; we have therefore conducted a feasibility analysis of porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function-testing platform to extract a small computing kernel out of CLM, applied this kernel within the actual CLM dataflow procedure, and investigated the strategy of data parallelization and the benefit of the data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node the performance (based on the actual computation time using one GPU) of the OpenACC implementation is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but 2.8 times slower than the OpenMP implementation using 16 threads. On multiple nodes, the MPI_OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for looking into the potential benefits of the “deep copy” capability and the “routine” feature of the OpenACC standard. In conclusion, we believe that our experience with this environmental model, CLM, can be beneficial to many other scientific research programs interested in porting their large-scale scientific codes onto high-end computers built on hybrid computing architectures using OpenACC.
Heat-Assisted Magnetic Recording: Fundamental Limits to Inverse Electromagnetic Design
NASA Astrophysics Data System (ADS)
Bhargava, Samarth
In this dissertation, we address the burgeoning fields of diffractive optics, metal-optics and plasmonics, and computational inverse problems in the engineering design of electromagnetic structures. We focus on the optical nano-focusing system that will enable Heat-Assisted Magnetic Recording (HAMR), a higher-density magnetic recording technology that will fulfill the exploding worldwide demand for digital data storage. The heart of HAMR is a system that focuses light to a nanoscale, sub-diffraction-limit spot with an extremely high power density via an optical antenna. We approach this engineering problem by first discussing the fundamental limits of nano-focusing and the material limits for metal-optics and plasmonics. Then, we use efficient gradient-based optimization algorithms to computationally design shapes of 3D nanostructures that outperform human designs on the basis of mass-market product requirements. In 2014, the world manufactured ~1 zettabyte (ZB), i.e., 1 billion terabytes (TB), of data storage devices, including ~560 million magnetic hard disk drives (HDDs). Global demand for storage will likely increase by 10x in the next 5-10 years, and manufacturing capacity cannot keep up with demand alone. We discuss the state-of-the-art HDD and why industry invented Heat-Assisted Magnetic Recording (HAMR) to overcome the data-density limitations. HAMR leverages the temperature sensitivity of magnets, in which the coercivity suddenly and non-linearly falls at the Curie temperature. Data recording to high-density hard disks can be achieved by locally heating one bit of information while co-applying a magnetic field. The heating can be achieved by focusing 100 microwatts of light to a 30-nm-diameter spot on the hard disk. This is an enormous light intensity, roughly 100,000,000x the intensity of sunlight on the earth's surface! This power density is ~1,000x the output of the gold-coated tapered optical fibers used in Near-field Scanning Optical Microscopes (NSOM), the incumbent technology for focusing light to the nano-scale. Even in these lower-power NSOM probe tips, optical self-heating and deformation of the nano-gold tips are significant reliability and performance bottlenecks. Hence, the design and manufacture of the higher-power optical nano-focusing system for HAMR must overcome great engineering challenges in optical and thermal performance. There has been much debate about alternative materials for metal-optics and plasmonics to cure the current plague of optical loss and thermal reliability in this burgeoning field. We clear the air. For an application like HAMR, where intense self-heating occurs, refractory metals and metal nitrides with high melting points but low optical and thermal conductivities are inferior to noble metals. This conclusion contradicts several claims and may be counter-intuitive to some, but the analysis is simple, evident, and relevant to any engineer working on metal-optics and plasmonics. Indeed, the best metals for DC and RF electronics are also the best at optical frequencies. We also argue that the geometric design of electromagnetic structures (especially sub-wavelength devices) is too cumbersome for human designers, because the wave nature of light makes this inverse problem non-convex and non-linear. When the computation for one forward simulation is extremely demanding (hours on a high-performance computing cluster), typical designers constrain themselves to only 2 or 3 degrees of freedom.
We attack the inverse electromagnetic design problem using gradient-based optimization, after leveraging the adjoint method to efficiently calculate the gradient (i.e., the sensitivity) of an objective function with respect to thousands to millions of parameters. This approach results in creative computational designs of electromagnetic structures that human designers could not have conceived, yet which yield better optical performance. After gaining key insights from the fundamental limits and building our Inverse Electromagnetic Design software, we finally attempt to solve the challenges in enabling HAMR and the future supply of digital data storage hardware. In 2014, the hard disk industry spent ~$200 million in R&D, but poor optical and thermal performance of the metallic nano-transducer continues to prevent a commercial HAMR product. Via our design process, we successfully generated computational designs for the nano-focusing system that meet specifications for higher data density, lower adjacent-track interference, lower laser-power requirements and, most notably, lower self-heating of the crucial metallic nano-antenna. We believe that computational design will be a crucial component of commercial HAMR as well as of many other commercially significant applications of micro- and nano-optics. If successful in commercializing HAMR, the hard disk industry may sell 1 billion HDDs per year by 2025, with an average of 6 semiconductor diode lasers and 6 optical chips per drive. The key players will become the largest manufacturers of integrated optical chips and nano-antennas in the world. This industry will perform millions of single-mode laser alignments per day. (Abstract shortened by UMI.)
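The adjoint trick referred to above can be illustrated on a toy linear model: one forward solve plus one adjoint solve yields the sensitivity of a scalar objective with respect to every parameter at once, instead of one extra solve per parameter. The sketch below uses a small random system as a stand-in for a Maxwell solver; all names and numbers are illustrative and are not the dissertation's actual software.

```python
# Adjoint sensitivity on a toy model A(p) u = b with objective J = c^T u.
# dJ/dp_k = -lam^T (dA/dp_k) u, where A^T lam = c is the single adjoint solve.
import numpy as np

rng = np.random.default_rng(0)
n, n_params = 50, 200
A0 = np.eye(n) * 5.0
dA = rng.standard_normal((n_params, n, n)) * 0.01   # fixed sensitivities dA/dp_k
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def objective_and_gradient(p):
    A = A0 + np.tensordot(p, dA, axes=1)   # A(p) = A0 + sum_k p_k dA_k
    u = np.linalg.solve(A, b)              # forward solve
    lam = np.linalg.solve(A.T, c)          # single adjoint solve: A^T lam = c
    J = c @ u
    grad = -np.einsum('i,kij,j->k', lam, dA, u)   # one cheap contraction per parameter
    return J, grad

# sanity check against a finite difference on one parameter
p = np.zeros(n_params)
J, g = objective_and_gradient(p)
eps = 1e-6
p2 = p.copy(); p2[7] += eps
J2, _ = objective_and_gradient(p2)
print("adjoint grad:", g[7], " finite diff:", (J2 - J) / eps)
```

The finite-difference check at the end is only a sanity test; in a real electromagnetic design loop the forward and adjoint solves would each be full field simulations, which is exactly why getting the whole gradient from one adjoint solve matters.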
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
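In the spirit of the power-monitoring instrumentation described, per-kernel energy can be obtained by integrating time-stamped power samples over the interval in which a kernel ran. The sketch below uses invented samples and a trapezoid rule; it is only an illustration of the bookkeeping, not the authors' instrumentation.

```python
# Turn time-stamped power samples into per-kernel energy (invented data).
import numpy as np

def kernel_energy(t, p, t_start, t_end):
    """Integrate power p(t) [W] over [t_start, t_end] with the trapezoid rule -> joules."""
    mask = (t >= t_start) & (t <= t_end)
    ts, ps = t[mask], p[mask]
    return float(np.sum(0.5 * (ps[1:] + ps[:-1]) * np.diff(ts)))

t = np.linspace(0.0, 2.0, 2001)                    # 1 kHz sampling for 2 s
p = 35.0 + 20.0 * ((t > 0.5) & (t < 1.3))          # idle 35 W, 55 W while the kernel runs
E = kernel_energy(t, p, 0.5, 1.3)
print(f"kernel energy: {E:.1f} J, average power: {E / 0.8:.1f} W")
```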
Other Planetary Systems: The View From Our Neighborhood
NASA Technical Reports Server (NTRS)
Cruikshank, Dale P.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
The structure and contents of the Solar System offer an initial model for other planetary systems in this and other galaxies. Our knowledge of the bodies in the Solar System and their physical conditions has grown enormously in the three decades of planetary exploration. Parallel to the uncovering of new facts has been a great expansion of our understanding of just how these conditions came to be. Telescopic studies and missions to all the planets (except Pluto) have shown spectacular and unexpected diversity among those planets, their satellites, the asteroids, and the comets. Highlights include the organic-rich crust of comets, volcanic activity on planetary satellites, randomly oriented magnetic fields of the major planets, the existence of a huge population of planetesimals just beyond Neptune, dramatic combinations of exogenic and endogenic forces shaping the solid bodies throughout the Solar System, and much more. Simultaneously, computational, laboratory, and conceptual advances have shown that the Solar System is not fully evolved either dynamically or chemically. The discovery of clearly identified interstellar (presolar) material in the meteorites and comets connects us directly with the matter in the molecular cloud from which the Solar System originated. At the same time, an increased understanding of the chemistry of comets and the impact history of the planets has demonstrated the dependence of the origin and evolution of life on Earth on powerful exogenic factors. This presentation summarizes some of the new knowledge of the Solar System and proposes specific characteristics that may be observed in (or used as criteria for identification of) extrasolar planetary systems.
NASA Astrophysics Data System (ADS)
Gasior, P.
2014-11-01
Since the process of energy production in the stars was identified as thermonuclear fusion, this mechanism has been proclaimed a future, extremely modern, reliable, and safe way of sustaining the energy needs of humankind. However, although the idea itself was rather straightforward and the first attempts to harness thermonuclear reactions were made as early as the 1940s, it quickly became apparent that the physical and technical problems of domesticating the exotic high-temperature medium known as plasma are far from trivial. Although technical developments such as lasers, superconductors, advanced semiconductor electronics, and computers contributed significantly to the development of thermonuclear fusion reactors, their efficient performance remained out of reach of technology for a very long time. Years of scientific progress have led to the conclusion that the development of thermonuclear power plants requires an enormous interdisciplinary effort across many fields of science, covering not only plasma physics but also materials research, superconductors, lasers, advanced diagnostic systems (e.g., spectroscopy, interferometry, scattering techniques) with huge amounts of data to be processed, cryogenics, measurement-and-control systems, automation, robotics, nanotechnology, and more. Owing to the sophistication of the problems of plasma control and plasma-material interactions, only such a combination of research effort can produce a positive outcome capable of assuring the energy needs of our civilization. In this paper the problems of thermonuclear technology are briefly outlined, and it is shown why this domain can be a broad field for experts in electronics, optoelectronics, programming, and numerical simulation who at first glance may have nothing in common with plasma or nuclear physics.
2012-01-01
Background: As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to its enormous size. This is, and will remain, a frequent problem encountered every day by researchers working on genetic data. There are some options available for compressing and storing such data, such as general-purpose compression software, the PBAT/PLINK binary format, etc. However, these currently available methods either do not offer sufficient compression rates or require a great amount of CPU time for decompression and loading every time the data is accessed. Results: Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundred, which potentially allows SNP data of hundreds of gigabytes to be stored in hundreds of megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than commonly used compression methods and that the data-loading process takes less time. Also, the C++ library provides direct data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions: The SpeedGene algorithm enables the storage and analysis of next-generation sequencing data in current hardware environments, making system upgrades unnecessary. PMID:22591016
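The SpeedGene algorithm itself is not specified in this abstract, but the headroom it exploits can be illustrated with the 2-bit genotype packing used by the PLINK binary format mentioned above: each biallelic genotype takes one of four values, so four genotypes fit in one byte and loading requires no real decompression. The sketch below is a generic illustration, not the SpeedGene method.

```python
# 2-bit genotype packing: values 0, 1, 2, or missing (3) -> four genotypes per byte.
import numpy as np

def pack_genotypes(g):
    """g: array of ints in {0,1,2,3}. Returns a uint8 array with 4 genotypes per byte."""
    g = np.asarray(g, dtype=np.uint8)
    pad = (-len(g)) % 4
    g = np.concatenate([g, np.full(pad, 3, dtype=np.uint8)])   # pad with 'missing'
    g = g.reshape(-1, 4)
    return g[:, 0] | (g[:, 1] << 2) | (g[:, 2] << 4) | (g[:, 3] << 6)

def unpack_genotypes(packed, n):
    """Inverse of pack_genotypes for the first n genotypes."""
    g = (packed[:, None] >> np.array([0, 2, 4, 6], dtype=np.uint8)) & 3
    return g.reshape(-1)[:n]

genos = np.random.randint(0, 3, size=1_000_003).astype(np.uint8)   # simulated SNP calls
packed = pack_genotypes(genos)
assert np.array_equal(unpack_genotypes(packed, len(genos)), genos)
print(f"{genos.nbytes / packed.nbytes:.1f}x smaller than one byte per genotype")
```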
Energy-efficient neural information processing in individual neurons and neuronal networks.
Yu, Lianchun; Yu, Yuguo
2017-11-01
Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to an energy-efficient neural code for processing input signals. These factors include ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and save space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.
1984-12-01
During the 1980's we are seeing enhancement of breadth, power, and accessibility of computers in many dimensions. (1) Powerful, costly, fragile mainframes... MEMORANDUM FOR THE CHAIRMAN, DEFENSE ... SUBJECT: Defense Science Board Task Force on Supercomputer Applications. You are requested to...