DOT National Transportation Integrated Search
2017-06-30
The ever-increasing processing speed and computational power of computers and simulation systems have led to correspondingly larger, more sophisticated representations of evacuation traffic processes. Today, micro-level analyses can be conducted for m...
ERIC Educational Resources Information Center
White, Su
2007-01-01
Computer technology has been harnessed for education in UK universities ever since the first computers for research were installed at 10 selected sites in 1957. Subsequently, real costs have fallen dramatically. Processing power has increased; network and communications infrastructure has proliferated, and information has become unimaginably…
A Decade of Neural Networks: Practical Applications and Prospects
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.
1994-01-01
The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.
A Technical Review of Cellular Radio and Analysis of a Possible Protocol
1992-09-01
1. The Pioneers; 2. Timeline of Radio Evolution ... cellular telephone. Advances in low-power radio transmission and the speed with which modern computers can aid in frequency management and signal ... lecturer at the Royal Institution in London. He subsequently worked his way up to lecturer and devoted ever-increasing amounts of time to experiments
On-chip phase-change photonic memory and computing
NASA Astrophysics Data System (ADS)
Cheng, Zengguang; Ríos, Carlos; Youngblood, Nathan; Wright, C. David; Pernice, Wolfram H. P.; Bhaskaran, Harish
2017-08-01
The use of photonics in computing is a topic of intense interest, driven by the need for ever-increasing speed along with reduced power consumption. In existing computing architectures, photonic data storage would dramatically improve performance by reducing the latencies associated with electrical memories. At the same time, the rise of 'big data' and 'deep learning' is driving the quest for non-von Neumann and brain-inspired computing paradigms. To address both aspects, we have demonstrated a non-volatile multi-level photonic memory that avoids the von Neumann bottleneck of the existing computing paradigm, and a photonic synapse resembling biological synapses for brain-inspired computing, both using phase-change materials (Ge2Sb2Te5).
Bio and health informatics meets cloud : BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase of genomic data brought by the advent of next- and third-generation sequencing (NGS) technologies and the dramatic drop in sequencing cost have turned biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, and it is a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling the ever-increasing biological data. As data increase in size, many research organizations start to experience a lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security and extensibility of features. We then discuss the issues and limitations of current cloud systems and conclude by suggesting a biological cloud environment concept, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
Menzies, Kevin
2014-08-13
The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power, coupled with improvements in numerical methods and physical modelling in simulation codes, has enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However, even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Earth Science Informatics Comes of Age
NASA Technical Reports Server (NTRS)
Khalsa, Siri Jodha S.; Ramachandran, Rahul
2014-01-01
The volume and complexity of Earth science data have steadily increased, placing ever-greater demands on researchers, software developers and data managers tasked with handling such data. Additional demands arise from requirements being levied by funding agencies and governments to better manage, preserve and provide open access to data. Fortunately, over the past 10-15 years significant advances in information technology, such as increased processing power, advanced programming languages, more sophisticated and practical standards, and near-ubiquitous internet access, have made the jobs of those acquiring, processing, distributing and archiving data easier. These advances have also led to an increasing number of individuals entering the field of informatics as it applies to Geoscience and Remote Sensing. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of data, information, and knowledge. Informatics also encompasses the use of computers and computational methods to support decision-making and other applications for societal benefits.
The origins of computer weather prediction and climate modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Peter
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
The origins of computer weather prediction and climate modeling
NASA Astrophysics Data System (ADS)
Lynch, Peter
2008-03-01
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
COMPUTER-AIDED DRUG DISCOVERY AND DEVELOPMENT (CADDD): in silico-chemico-biological approach
Kapetanovic, I.M.
2008-01-01
It is generally recognized that drug discovery and development are very time- and resource-consuming processes. There is an ever-growing effort to apply computational power to the combined chemical and biological space in order to streamline drug discovery, design, development and optimization. In the biomedical arena, computer-aided or in silico design is being utilized to expedite and facilitate hit identification and hit-to-lead selection, to optimize the absorption, distribution, metabolism, excretion and toxicity profile, and to avoid safety issues. Commonly used computational approaches include ligand-based drug design (pharmacophore, a 3-D spatial arrangement of chemical features essential for biological activity), structure-based drug design (drug-target docking), and quantitative structure-activity and quantitative structure-property relationships. Regulatory agencies as well as the pharmaceutical industry are actively involved in the development of computational tools that will improve the effectiveness and efficiency of the drug discovery and development process, decrease the use of animals, and increase predictability. It is expected that the power of CADDD will grow as the technology continues to evolve. PMID:17229415
Computer Competency of Nursing Students at a University in Thailand
ERIC Educational Resources Information Center
Niyomkar, Srimana
2012-01-01
In recent years, computer and information technology have been rapidly integrated into the education and healthcare fields. In the 21st century, computers are more powerful than ever, and are used in all aspects of nursing, including education, practice, policy, and research. Consequently, student nurses will need to utilize computer…
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
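A minimal sketch of the kind of spatial domain decomposition the abstract describes, under assumptions of my own: cells are points in 2-D, each process owns one x-strip, and "halo" cells within one interaction radius of a strip boundary are the ones that would need to be communicated to neighbouring processes. This is an illustration of the general technique, not the authors' implementation.

```python
# Sketch (assumed set-up, not the paper's code): strip-wise decomposition of an
# off-lattice cell population, identifying owned and halo cells for each process.
import numpy as np

def decompose(positions, n_procs, interaction_radius, x_min, x_max):
    """Return, per process, indices of owned cells and of halo cells near its boundaries."""
    width = (x_max - x_min) / n_procs
    x = positions[:, 0]
    owned, halos = [], []
    for p in range(n_procs):
        lo, hi = x_min + p * width, x_min + (p + 1) * width
        owned.append(np.where((x >= lo) & (x < hi))[0])
        # Halo cells: owned by neighbouring strips but close enough to interact across the boundary.
        halos.append(np.where(((x >= lo - interaction_radius) & (x < lo)) |
                              ((x >= hi) & (x < hi + interaction_radius)))[0])
    return owned, halos

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cells = rng.uniform(0.0, 10.0, size=(1000, 2))   # hypothetical cell centres
    owned, halos = decompose(cells, n_procs=4, interaction_radius=0.5, x_min=0.0, x_max=10.0)
    for p, (o, h) in enumerate(zip(owned, halos)):
        print(f"process {p}: {len(o)} owned cells, {len(h)} halo cells")
```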
Dense and Sparse Matrix Operations on the Cell Processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel W.; Shalf, John; Oliker, Leonid
2005-05-01
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.
Towards Scalable Graph Computation on Mobile Devices.
Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng
2014-10-01
Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
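A minimal sketch of the memory-mapping idea described above, not the paper's implementation: the edge list lives in a binary file, and the operating system pages edges in on demand as the computation streams over them, so the graph never has to fit in main memory. The file layout (consecutive int32 source/destination pairs) and the toy graph are assumptions for illustration.

```python
# Sketch: out-degree computation over a memory-mapped binary edge list.
import numpy as np

def out_degrees(edge_file, n_nodes, chunk_edges=1_000_000):
    edges = np.memmap(edge_file, dtype=np.int32, mode="r").reshape(-1, 2)
    deg = np.zeros(n_nodes, dtype=np.int64)
    for start in range(0, len(edges), chunk_edges):
        chunk = edges[start:start + chunk_edges]        # touching the slice pages it in
        deg += np.bincount(chunk[:, 0], minlength=n_nodes)
    return deg

if __name__ == "__main__":
    import os, tempfile
    toy_edges = np.array([[0, 1], [0, 2], [1, 2], [2, 0]], dtype=np.int32)
    path = os.path.join(tempfile.gettempdir(), "toy_edges.bin")   # hypothetical toy graph file
    toy_edges.tofile(path)
    print(out_degrees(path, n_nodes=3))                 # -> [2 1 1]
```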
Towards Scalable Graph Computation on Mobile Devices
Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng
2015-01-01
Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This issue of Continuum Magazine covers the depth and breadth of NREL's ever-expanding analytical capabilities. For example, in one project we are leading national efforts to create a computer model of one of the most complex systems ever built. This system, the eastern part of the North American power grid, will likely host an increasing percentage of renewable energy in years to come. Understanding how this system will work is important to its success - and NREL analysis is playing a major role. We are also identifying the connections among energy, the environment and the economy through analysis that will point us toward a 'water smart' future.
Terascale Computing in Accelerator Science and Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Kwok
2002-08-21
We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.
Future in biomolecular computation
NASA Astrophysics Data System (ADS)
Wimmer, E.
1988-01-01
Large-scale computations for biomolecules are dominated by three levels of theory: rigorous quantum mechanical calculations for molecules with up to about 30 atoms, semi-empirical quantum mechanical calculations for systems with up to several hundred atoms, and force-field molecular dynamics studies of biomacromolecules with 10,000 atoms and more including surrounding solvent molecules. It can be anticipated that increased computational power will allow the treatment of larger systems of ever-growing complexity. Due to the scaling of the computational requirements with an increasing number of atoms, the force-field approaches will benefit the most from increased computational power. On the other hand, progress in methodologies such as density functional theory will enable us to treat larger systems on a fully quantum mechanical level, and a combination of molecular dynamics and quantum mechanics can be envisioned. One of the greatest challenges in biomolecular computation is the protein folding problem. It is unclear at this point whether an approach with current methodologies will lead to a satisfactory answer or if unconventional, new approaches will be necessary. In any event, due to the complexity of biomolecular systems, a hierarchy of approaches will have to be established and used in order to capture the wide ranges of length-scales and time-scales involved in biological processes. In terms of hardware development, speed and power of computers will increase while the price/performance ratio will become more and more favorable. Parallelism can be anticipated to become an integral architectural feature in a range of computers. It is unclear at this point how fast massively parallel systems will become easy enough to use so that new methodological developments can be pursued on such computers. Current trends show that distributed processing such as the combination of convenient graphics workstations and powerful general-purpose supercomputers will lead to a new style of computing in which the calculations are monitored and manipulated as they proceed. The combination of a numeric approach with artificial-intelligence approaches can be expected to open up entirely new possibilities. Ultimately, the most exciting aspect of the future in biomolecular computing will be the unexpected discoveries.
Neuromorphic computing enabled by physics of electron spins: Prospects and perspectives
NASA Astrophysics Data System (ADS)
Sengupta, Abhronil; Roy, Kaushik
2018-03-01
“Spintronics” refers to the understanding of the physics of electron spin-related phenomena. While most of the significant advancements in this field have been driven primarily by memory, recent research has demonstrated that various facets of the underlying physics of spin transport and manipulation can directly mimic the functionalities of the computational primitives in neuromorphic computation, i.e., the neurons and synapses. Given the potential of these spintronic devices to implement bio-mimetic computations at very low terminal voltages, several spin-device structures have been proposed as the core building blocks of neuromorphic circuits and systems to implement brain-inspired computing. Such an approach is expected to play a key role in circumventing the problems of ever-increasing power dissipation and hardware requirements for implementing neuro-inspired algorithms in conventional digital CMOS technology. Perspectives on spin-enabled neuromorphic computing, its status, and challenges and future prospects are outlined in this review article.
An efficient approach for improving virtual machine placement in cloud computing environment
NASA Astrophysics Data System (ADS)
Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.
2017-11-01
The ever-increasing demand for cloud services requires more data centres. The power consumption in data centres is a challenging problem for cloud computing, and it has not been considered properly by data centre developer companies. Large data centres in particular struggle with power costs and greenhouse gas production. Hence, employing power-efficient mechanisms is necessary to mitigate these effects. Moreover, virtual machine (VM) placement can be used as an effective method to reduce the power consumption in data centres. In this paper, by grouping both virtual and physical machines, and taking into account the maximum absolute deviation during the VM placement, the power consumption as well as the service level agreement (SLA) deviation in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation to reduce the power consumption by about 5% compared to the modified best-fit decreasing algorithm, and at the same time, the SLA violation is improved by 6%. Finally, learning automata are used to trade off power-consumption reduction against the SLA violation percentage.
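A minimal sketch of the best-fit decreasing heuristic mentioned in the abstract, reduced to a single CPU-utilisation dimension (an assumption; the paper's grouping and deviation criteria are not reproduced here). Each VM is placed on the powered-on host that leaves the least spare capacity, and a new host is opened only when nothing fits, which tends to keep fewer hosts active and hence saves power.

```python
# Sketch: one-dimensional best-fit decreasing (BFD) placement of VMs on hosts.
def best_fit_decreasing(vm_demands, host_capacity):
    hosts = []                                    # remaining capacity of each open host
    placement = {}
    for vm, demand in sorted(enumerate(vm_demands), key=lambda kv: kv[1], reverse=True):
        candidates = [(cap - demand, i) for i, cap in enumerate(hosts) if cap >= demand]
        if candidates:
            _, best = min(candidates)             # tightest fit among hosts already on
        else:
            hosts.append(host_capacity)           # power on a new host
            best = len(hosts) - 1
        hosts[best] -= demand
        placement[vm] = best
    return placement, len(hosts)

placement, n_hosts = best_fit_decreasing([0.3, 0.7, 0.2, 0.5, 0.4], host_capacity=1.0)
print(placement, "hosts used:", n_hosts)
```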
Smartphones Based Mobile Mapping Systems
NASA Astrophysics Data System (ADS)
Al-Hamad, A.; El-Sheimy, N.
2014-06-01
The past 20 years have witnessed an explosive growth in the demand for geo-spatial data. This demand has numerous sources and takes many forms; however, the net effect is an ever-increasing thirst for data that is more accurate, has higher density, is produced more rapidly, and is acquired less expensively. For mapping and Geographic Information Systems (GIS) projects, this has been achieved through the major development of Mobile Mapping Systems (MMS). MMS integrate various navigation and remote sensing technologies which allow mapping from moving platforms (e.g. cars, airplanes, boats, etc.) to obtain the 3D coordinates of the points of interest. Such systems obtain accuracies that are suitable for all but the most demanding mapping and engineering applications. However, this accuracy doesn't come cheaply. As a consequence of the platform and navigation and mapping technologies used, even an "inexpensive" system costs well over 200 000 USD. Today's mobile phones are getting ever more sophisticated. Phone makers are determined to reduce the gap between computers and mobile phones. Smartphones, in addition to becoming status symbols, are increasingly being equipped with extended Global Positioning System (GPS) capabilities, Micro Electro Mechanical System (MEMS) inertial sensors, substantial computing power and very high-resolution cameras. Using all of these components, smartphones have the potential to replace the traditional land MMS and portable GPS/GIS equipment.
Reducing cooling energy consumption in data centres and critical facilities
NASA Astrophysics Data System (ADS)
Cross, Gareth
Given the rise of our everyday reliance on computers in all walks of life, from checking train times to paying credit card bills online, the need for computational power is ever increasing. Beyond the ever-increasing performance of home Personal Computers (PCs), this reliance has given rise to a new phenomenon in the last 10 years: the data centre. Data centres contain vast arrays of IT cabinets loaded with servers that perform millions of computational operations every second. It is these data centres that allow us to continue our reliance on the internet and the PC. As more and more data centres become necessary, owing to the increase in computing processing power required for the everyday activities we all take for granted, the energy consumed by these data centres rises. Not only are more and more data centres being constructed daily, but operators are also looking at ways to squeeze more processing from their existing data centres. This in turn leads to greater heat outputs and therefore requires more cooling. Cooling data centres requires a sizeable energy input, often amounting to many megawatts per data centre site. Given the large amounts of money dependent on the successful operation of data centres, in particular those operated by financial institutions, the onus is predominantly on ensuring the data centres operate with no technical glitches rather than in an energy-conscious fashion. This report aims to investigate the ways and means of reducing energy consumption within data centres without compromising the technology the data centres are designed to house. As well as discussing the individual merits of the technologies and their implementation, technical calculations will be undertaken where necessary to determine the levels of energy saving, if any, from each proposal. To enable comparison between proposals, any design calculations within this report will be undertaken against a notional data facility, which will nominally be considered to require 1000 kW. Refer to Section 2.1 'Outline of Notional Data Facility for Calculation Purposes' for details of the design conditions and constraints of the energy consumption calculations.
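The report sizes its proposals against a notional 1000 kW facility; the back-of-envelope sketch below illustrates the kind of cooling-energy comparison described. The COP values and the free-cooling fraction are assumptions chosen for illustration, not figures taken from the report.

```python
# Sketch: annual cooling energy for a notional 1000 kW IT load, with and without
# an assumed share of hours on free cooling (illustrative numbers only).
IT_LOAD_KW = 1000.0
HOURS_PER_YEAR = 8760

def annual_cooling_energy(cop):
    """Electrical energy (kWh/yr) needed to reject the IT heat at a given COP."""
    return IT_LOAD_KW * HOURS_PER_YEAR / cop

baseline = annual_cooling_energy(cop=3.0)            # conventional chilled-water plant (assumed COP)
free_cooling_fraction = 0.4                           # assumed share of hours on economisers
improved = ((1 - free_cooling_fraction) * annual_cooling_energy(cop=3.0)
            + free_cooling_fraction * annual_cooling_energy(cop=10.0))
print(f"baseline cooling energy : {baseline / 1e6:.2f} GWh/yr")
print(f"with free cooling       : {improved / 1e6:.2f} GWh/yr")
print(f"saving                  : {(baseline - improved) / 1e6:.2f} GWh/yr")
```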
Connecting the virtual world of computers to the real world of medicinal chemistry.
Glen, Robert C
2011-03-01
Drug discovery involves the simultaneous optimization of chemical and biological properties, usually in a single small molecule, which modulates one of nature's most complex systems: the balance between human health and disease. The increased use of computer-aided methods is having a significant impact on all aspects of the drug-discovery and development process and with improved methods and ever faster computers, computer-aided molecular design will be ever more central to the discovery process.
Optical interconnects for satellite payloads: overview of the state-of-the-art
NASA Astrophysics Data System (ADS)
Vervaeke, Michael; Debaes, Christof; Van Erps, Jürgen; Karppinen, Mikko; Tanskanen, Antti; Aalto, Timo; Harjanne, Mikko; Thienpont, Hugo
2010-05-01
The increased demand for broadband communication services such as High Definition Television, Video on Demand and Triple Play fuels the technologies to enhance the bandwidth of individual users towards service providers, and hence increases aggregate bandwidths on terrestrial networks. Optical solutions readily satisfy this appetite for bandwidth, whereas electrical interconnection schemes require an ever-increasing effort to counteract signal distortions at higher bitrates. Dense wavelength division multiplexing and all-optical signal regeneration and switching solve the bandwidth demands of network trunks. Fiber-to-the-home and fiber-to-the-desk are trends towards providing individual users with greatly increased bandwidth. Operators in the satellite telecommunication sector face similar challenges, fuelled by the same demands as their terrestrial counterparts. Moreover, the limited number of orbital positions for new satellites sets the trend for an increase in payload datacommunication capacity using an ever-increasing number of complex multi-beam active antennas and a larger aggregate bandwidth. Only satellites with very large capacity, high computational density and flexible, transparent, fully digital payload solutions achieve affordable communication prices. To keep pace with the bandwidth and flexibility requirements, designers have to come up with systems requiring a total digital throughput of a few Tb/s, resulting in a high-power-consuming satellite payload. An estimated 90% of the total power consumption per chip is used for the off-chip communication lines. We have undertaken a study to assess the viability of optical datacommunication solutions to alleviate the demands regarding power consumption and aggregate bandwidth imposed on future satellite communication payloads. The review of optical interconnects given here is especially focused on the demands of the satellite communication business and the particular environment in which the optics have to perform their functionality: space.
Computational methods in drug discovery
Leelananda, Sumudu P
2016-01-01
The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341
Computational methods in drug discovery.
Leelananda, Sumudu P; Lindert, Steffen
2016-01-01
The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.
Developing software to use parallel processing effectively. Final report, June-December 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Center, J.
1988-10-01
This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.
High Performance, Dependable Multiprocessor
NASA Technical Reports Server (NTRS)
Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric;
2006-01-01
With the ever-increasing demand for higher bandwidth and processing capacity in today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.
Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.
Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin
2015-01-01
Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation tested on three NVIDIA GPUs achieves a speedup of up to 11.28 on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
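A minimal sketch of why run-length encoding shrinks aligned sequence data: long runs of identical residues or gap characters collapse into (symbol, count) pairs. This is plain RLE for illustration only, not the paper's modified MRLE scheme, and the toy alignment string is an assumption.

```python
# Sketch: plain run-length encoding of an aligned sequence fragment.
def rle_encode(seq):
    runs = []
    for ch in seq:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs):
    return "".join(ch * n for ch, n in runs)

aligned = "ACGT" + "-" * 40 + "TTTTGG"        # hypothetical gapped alignment row
encoded = rle_encode(aligned)
assert rle_decode(encoded) == aligned
print(len(aligned), "characters ->", len(encoded), "runs")
```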
The journey from forensic to predictive materials science using density functional theory
Schultz, Peter A.
2017-09-12
Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.
The journey from forensic to predictive materials science using density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schultz, Peter A.
Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.
The Iterative Research Cycle: Process-Based Model Evaluation
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2014-12-01
The ever-increasing pace of growth in computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.
Use of Monte Carlo simulation for the interpretation and analysis of diffuse scattering
NASA Astrophysics Data System (ADS)
Welberry, T. R.; Chan, E. J.; Goossens, D. J.; Heerdegen, A. P.
2010-02-01
With the development of computer simulation methods there is, for the first time, the possibility of having a single general method that can be used for any diffuse scattering problem in any type of system. As computers get ever faster it is expected that current methods will become increasingly powerful and applicable to a wider and wider range of problems and materials and provide results in increasingly fine detail. In this article we discuss two contrasting recent examples. The first is concerned with the two polymorphic forms of the pharmaceutical compound benzocaine. The strong and highly structured diffuse scattering in these is shown to be symptomatic of the presence of highly correlated molecular motions. The second concerns Ag+ fast ion conduction in the pearceite/polybasite family of mineral solid electrolytes. Here Monte-Carlo simulation is used to model the diffuse scattering and gain insight into how the ionic conduction arises.
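A minimal sketch of the Monte Carlo simulation idea in its simplest form, and deliberately generic rather than a reproduction of the authors' benzocaine or pearceite models: a Metropolis loop on a binary occupancy lattice with energy E = J times the sum of nearest-neighbour products. A negative J favours unlike neighbours, building the kind of short-range order that produces structured diffuse scattering, and the squared Fourier transform of the final configuration mimics such a pattern. All parameter values are assumptions.

```python
# Sketch: Metropolis Monte Carlo on a binary lattice as a toy diffuse-scattering model.
import numpy as np

def metropolis(n=32, J=-1.0, beta=1.0, sweeps=100, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps * n * n):
        i, j = rng.integers(0, n, size=2)
        nbr_sum = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
                   spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = -2.0 * J * spins[i, j] * nbr_sum          # change in E = J * sum(s_i * s_j) on flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

lattice = metropolis()
diffuse = np.abs(np.fft.fftshift(np.fft.fft2(lattice))) ** 2   # diffuse-scattering-like intensity map
print("mean site variable:", lattice.mean(), "| brightest Fourier component:", diffuse.max())
```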
MD-11 PCA - First Landing at Edwards
NASA Technical Reports Server (NTRS)
1995-01-01
This McDonnell Douglas MD-11 approaches the first landing ever of a transport aircraft under engine power only on Aug. 29, 1995, at NASA's Dryden Flight Research Center, Edwards, California. The milestone flight, flown by NASA research pilot and former astronaut Gordon Fullerton, was part of a NASA project to develop a computer-assisted engine control system that enables a pilot to land a plane safely when its normal control surfaces are disabled. The Propulsion-Controlled Aircraft (PCA) system uses standard autopilot controls already present in the cockpit, together with the new programming in the aircraft's flight control computers. The PCA concept is simple. For pitch control, the program increases thrust to climb and reduces thrust to descend. To turn right, the autopilot increases the left engine thrust while decreasing the right engine thrust. The initial Propulsion-Controlled Aircraft studies by NASA were carried out at Dryden with a modified twin-engine F-15 research aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael; Lethin, Richard
Programming models and environments play the essential role in high performance computing of enabling the conception, design, implementation and execution of science and engineering application codes. Programmer productivity is strongly influenced by the effectiveness of our programming models and environments, as is software sustainability, since our codes have lifespans measured in decades. The advent of new computing architectures, increased concurrency, concerns for resilience, and the increasing demands for high-fidelity, multi-physics, multi-scale and data-intensive computations mean that we have new challenges to address as part of our fundamental R&D requirements. Fortunately, we also have new tools and environments that make design, prototyping and delivery of new programming models easier than ever. The combination of new and challenging requirements and new, powerful toolsets enables significant synergies for the next generation of programming models and environments R&D. This report presents the topics discussed and results from the 2014 DOE Office of Science Advanced Scientific Computing Research (ASCR) Programming Models & Environments Summit, and subsequent discussions among the summit participants and contributors to topics in this report.
ERIC Educational Resources Information Center
O'Hanlon, Charlene; Schaffhauser, Dian
2011-01-01
It's a perfect storm out there, with powerful forces remaking the IT landscape in higher education. On one side, devastating budget cuts are pushing IT departments to identify ever-greater cost savings. On the other, the explosion in mobile devices is pressuring IT to provide anytime, anywhere computing with no downtime. And finally there's…
A Practical Approach to Protein Crystallography.
Ilari, Andrea; Savino, Carmelinda
2017-01-01
Macromolecular crystallography is a powerful tool for structural biology. Solving a protein crystal structure is becoming much easier than in the past, thanks to developments in computing, automation of crystallization techniques and high-flux synchrotron sources to collect diffraction datasets. The aim of this chapter is to provide practical procedures to determine a protein crystal structure, illustrating the new techniques, experimental methods, and software that have made protein crystallography a tool accessible to a larger scientific community. It is impossible to give more than a taste of what the X-ray crystallographic technique entails in one brief chapter, and there are different ways to solve a protein structure. Since the number of structures available in the Protein Data Bank (PDB) is becoming ever larger (the PDB now contains more than 100,000 entries) and therefore the probability of finding a good model to solve the structure is ever increasing, we focus our attention on the Molecular Replacement method. Indeed, whenever applicable, this method allows the resolution of macromolecular structures starting from a single data set and a search model downloaded from the PDB, with the aid only of computer work.
Computational challenges of structure-based approaches applied to HIV.
Forli, Stefano; Olson, Arthur J
2015-01-01
Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Benny N.
2000-01-01
There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years and this trend may also continue for the near future. However, it is a well-known fact that there are major obstacles, i.e., the physical limits of feature-size reduction and the ever-increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11 - 10^12 per square cm), low power consumption (no transfer of current) and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as FFT and Wavelet and Walsh-Hadamard Transforms.
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2013-12-01
The ever-increasing pace of growth in computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at increasingly finer spatial and temporal scales. Reconciling these system models with field and remote sensing data is a difficult task, particularly because average measures of model/data similarity inherently lack the power to provide a meaningful comparative evaluation of the consistency in model form and function. The very construction of the likelihood function - as a summary variable of the (usually averaged) properties of the error residuals - dilutes and mixes the available information into an index having little remaining correspondence to specific behaviors of the system (Gupta et al., 2008). The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh [2013] to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a much stronger and more compelling diagnostic power than some aggregated measure of the size of the error residuals. Here, we will introduce an efficient ABC sampling method that is orders of magnitude faster in exploring the posterior parameter distribution than commonly used rejection and Population Monte Carlo (PMC) samplers. Our methodology uses Markov Chain Monte Carlo simulation with DREAM, and takes advantage of a simple computational trick to resolve discontinuity problems with the application of set-theoretic summary statistics. We will also demonstrate a set of summary statistics that are rather insensitive to errors in the forcing data. This enhances prospects of detecting model structural deficiencies.
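A minimal sketch of the core ABC idea on a toy problem, not the DREAM-based sampler described in the abstract: parameter draws are kept when their simulated summary statistics fall within a tolerance of the observed summaries, so no formal likelihood is ever evaluated. The toy model, summary statistics and tolerance are assumptions for illustration.

```python
# Sketch: rejection ABC with summary statistics instead of a likelihood.
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(loc=2.0, scale=1.0, size=200)          # synthetic "field data"
obs_summary = np.array([observed.mean(), observed.std()])    # summary metrics, not raw residuals

def simulate(theta, n=200):
    return rng.normal(loc=theta, scale=1.0, size=n)           # toy forward model

accepted = []
for _ in range(20_000):
    theta = rng.uniform(-5.0, 5.0)                            # prior draw
    sim = simulate(theta)
    sim_summary = np.array([sim.mean(), sim.std()])
    if np.linalg.norm(sim_summary - obs_summary) < 0.15:      # tolerance epsilon
        accepted.append(theta)

print(f"accepted {len(accepted)} draws; approximate posterior mean = {np.mean(accepted):.2f}")
```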
Computing technology in the 1980's. [computers
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.
NASA Astrophysics Data System (ADS)
Telang, Aparna S.; Bedekar, P. P.
2017-09-01
Load flow analysis is the initial and essential step for any power system computation. It is required for choosing better options for power system expansion to meet ever-increasing load demand. Implementation in the load flow of a Flexible AC Transmission System (FACTS) device such as the STATCOM, which offers fast and very flexible control, is one of the important tasks for power system researchers. This paper presents a simple and systematic approach for steady-state power flow calculations with the FACTS controller static synchronous compensator (STATCOM), using command-line usage of the MATLAB tool Power System Analysis Toolbox (PSAT). The complexity of MATLAB language programming increases due to the incorporation of the STATCOM in an existing Newton-Raphson load flow algorithm. Thus, the main contribution of this paper is to show how command-line usage of the user-friendly MATLAB tool PSAT can extensively be used for quicker and wider interpretation of load flow results with the STATCOM. The novelty of this paper lies in the method of applying the load-increase pattern, where the active and reactive loads are changed simultaneously at all the load buses under consideration to create stressed conditions for load flow analysis with the STATCOM. The performance has been evaluated on many standard IEEE test systems, and the results for the standard IEEE 30-bus, IEEE 57-bus, and IEEE 118-bus systems are presented.
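A minimal sketch of the load flow formulation itself, not the PSAT/STATCOM implementation discussed above: a two-bus system whose PQ-bus power mismatch equations are driven to zero with a Newton-type root finder. The line impedance and load values are illustrative assumptions, and no STATCOM model is included.

```python
# Sketch: two-bus AC load flow solved from the power-mismatch equations.
import numpy as np
from scipy.optimize import fsolve

z_line = 0.02 + 0.08j                      # assumed series impedance of the single line (p.u.)
y = 1.0 / z_line
Ybus = np.array([[y, -y], [-y, y]])
V1 = 1.0 + 0.0j                            # slack bus held at 1.0 p.u., 0 degrees
P_load, Q_load = 0.8, 0.4                  # assumed PQ-bus load (p.u.)

def mismatch(x):
    vm, va = x                             # magnitude and angle of the bus-2 voltage
    V = np.array([V1, vm * np.exp(1j * va)])
    S2 = V[1] * np.conj(Ybus[1] @ V)       # complex power injected at bus 2
    return [S2.real + P_load, S2.imag + Q_load]   # injection equals minus the load at a PQ bus

vm, va = fsolve(mismatch, x0=[1.0, 0.0])
print(f"|V2| = {vm:.4f} p.u., angle = {np.degrees(va):.2f} deg")
```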
ECG R-R peak detection on mobile phones.
Sufi, F; Fang, Q; Cosic, I
2007-01-01
Mobile phones have become an integral part of modern life. Due to their ever-increasing processing power, mobile phones are rapidly expanding their arena from a sole device of telecommunication to organizer, calculator, gaming device, web browser, music player, audio/video recording device, navigator, etc. The processing power of modern mobile phones has been utilized for many innovative purposes. In this paper, we propose the utilization of mobile phones for the monitoring and analysis of biosignals. The computation performed inside the mobile phone's processor will now be exploited for healthcare delivery. We performed a literature review on R-R interval detection from the ECG and selected a few PC-based algorithms. Three of those existing R-R interval detection algorithms were then programmed on the Java platform. Performance monitoring and comparison studies were carried out on three different mobile devices to determine their applicability in a real-time telemonitoring scenario.
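A minimal sketch of a generic threshold-based R-peak detector, not one of the specific algorithms benchmarked in the paper: square the signal to emphasise the QRS complex, then accept local maxima that exceed a threshold and respect a refractory period, and take successive differences of the peak times to get R-R intervals. The sampling rate and the crude synthetic ECG below are assumptions for demonstration only.

```python
# Sketch: simple threshold-plus-refractory R-peak detection and R-R interval extraction.
import numpy as np

def detect_r_peaks(ecg, fs, refractory_s=0.25):
    energy = ecg ** 2
    threshold = 0.5 * energy.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(energy) - 1):
        is_local_max = energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]
        if is_local_max and energy[i] > threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)

fs = 250                                    # assumed sampling rate (Hz)
ecg = np.zeros(10 * fs)
ecg[::int(0.8 * fs)] = 1.0                  # crude synthetic spike train at 75 beats per minute
rr_intervals = np.diff(detect_r_peaks(ecg, fs)) / fs
print("R-R intervals (s):", np.round(rr_intervals, 3))
```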
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
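A back-of-envelope sketch of the grid-versus-TIN trade-off described above, with illustrative numbers that are not taken from the WRF-HYDRO or ADHydro configurations: a uniform grid must carry its finest resolution everywhere, while a TIN that refines only a fraction of the basin needs far fewer elements.

```python
# Sketch: compare element counts for a uniform grid and a variable-resolution TIN.
AREA_KM2 = 50_000                       # hypothetical large watershed

def uniform_grid_cells(resolution_m):
    return AREA_KM2 * (1000 / resolution_m) ** 2

def tin_nodes(fine_fraction, fine_res_m, coarse_res_m):
    """TIN approximated as fine resolution over a fraction of the basin, coarse elsewhere."""
    fine = AREA_KM2 * fine_fraction * (1000 / fine_res_m) ** 2
    coarse = AREA_KM2 * (1 - fine_fraction) * (1000 / coarse_res_m) ** 2
    return fine + coarse

print(f"uniform 30 m grid: {uniform_grid_cells(30):,.0f} cells")
print(f"TIN, 10% of area at 30 m and 90% at 300 m: {tin_nodes(0.10, 30, 300):,.0f} nodes")
```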
Laboratory for energy smart systems (LESS).
DOT National Transportation Integrated Search
2016-12-01
The US power grid is ageing fast, and the societal and environmental pressures for clean energy are increasing more than ever. The ageing power infrastructure poses major limitations on energy reliability and resiliency, especially in light of recent extr...
Six Suggestions for Research on Games in Cognitive Science.
Chabris, Christopher F
2017-04-01
Games are more varied and occupy more of daily life than ever before. At the same time, the tools available to study game play and players are more powerful than ever, especially massive data sets from online platforms and computational engines that can accurately evaluate human decisions. This essay offers six suggestions for future cognitive science research on games: (1) Don't forget about chess, (2) Look beyond action games and chess, (3) Use (near)-optimal play to understand human play and players, (4) Investigate social phenomena, (5) Raise the standards for studies of games as treatments, (6) Talk to real experts. Copyright © 2017 Cognitive Science Society, Inc.
MD-11 PCA - View of aircraft on ramp
NASA Technical Reports Server (NTRS)
1995-01-01
This McDonnell Douglas MD-11 is taxiing to a position on the flightline at NASA's Dryden Flight Research Center, Edwards, California, following its completion of the first and second landings ever performed by a transport aircraft under engine power only (on Aug. 29, 1995). The milestone flight, with NASA research pilot and former astronaut Gordon Fullerton at the controls, was part of a NASA project to develop a computer-assisted engine control system that enables a pilot to land a plane safely when its normal control surfaces are disabled. The Propulsion-Controlled Aircraft (PCA) system uses standard autopilot controls already present in the cockpit, together with the new programming in the aircraft's flight control computers. The PCA concept is simple. For pitch control, the program increases thrust to climb and reduces thrust to descend. To turn right, the autopilot increases the left engine thrust while decreasing the right engine thrust. The initial Propulsion-Controlled Aircraft studies by NASA were carried out at Dryden with a modified twin-engine F-15 research aircraft.
MD-11 PCA - Closeup view of aircraft on ramp
NASA Technical Reports Server (NTRS)
1995-01-01
This McDonnell Douglas MD-11 has taxied to a position on the flightline at NASA's Dryden Flight Research Center, Edwards, California, following its completion of the first and second landings ever performed by a transport aircraft under engine power only (on Aug. 29, 1995). The milestone flight, with NASA research pilot and former astronaut Gordon Fullerton at the controls, was part of a NASA project to develop a computer-assisted engine control system that enables a pilot to land a plane safely when its normal control surfaces are disabled. The Propulsion-Controlled Aircraft (PCA) system uses standard autopilot controls already present in the cockpit, together with the new programming in the aircraft's flight control computers. The PCA concept is simple. For pitch control, the program increases thrust to climb and reduces thrust to descend. To turn right, the autopilot increases the left engine thrust while decreasing the right engine thrust. The initial Propulsion-Controlled Aircraft studies by NASA were carried out at Dryden with a modified twin-engine F-15 research aircraft.
Active Flash: Out-of-core Data Analytics on Flash Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shujia; Duffy, Daniel; Clune, Thomas
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25 percent of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
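The idea of manually SIMDizing four independent columns can be illustrated with a toy kernel: the same arithmetic is applied to a block of columns at once instead of one column at a time. NumPy stands in for the Cell SPE vector units here, and the per-layer "radiation" update is an invented placeholder, not the GEOS-5 solar radiation code.

    # Toy illustration of processing 4 independent columns together (assumed kernel).
    import numpy as np

    def column_kernel_scalar(tau):
        # per-layer update for one column: transmission of accumulated optical depth
        out = np.empty_like(tau)
        acc = 0.0
        for k in range(tau.size):
            acc += tau[k]
            out[k] = np.exp(-acc)
        return out

    def column_kernel_blocked(tau_block):
        # identical arithmetic applied to 4 columns at once: shape (levels, 4)
        return np.exp(-np.cumsum(tau_block, axis=0))

    levels, ncol = 72, 4
    tau = np.random.rand(levels, ncol) * 0.01
    assert np.allclose(column_kernel_blocked(tau)[:, 0], column_kernel_scalar(tau[:, 0]))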
Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duque, Earl P.N.; Whitlock, Brad J.
High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids ever having to pay the costs of writing many terabytes of data files. Instead, in situ processing enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ analysis, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.
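The in situ pattern described above can be sketched as a simulation loop that hands its in-memory arrays to an analysis callback and keeps only small data products, instead of writing full volume data each step. The callback and "solver" below are invented for illustration and do not reflect the actual SENSEI API.

    # Minimal sketch of an in situ analysis hook (hypothetical, not SENSEI's interface).
    import numpy as np

    def in_situ_analysis(step, field):
        # reduce a full 3-D field to a few statistics (a "concentrated" data product)
        return {"step": step, "min": float(field.min()),
                "max": float(field.max()), "mean": float(field.mean())}

    def run_simulation(nsteps=5, shape=(64, 64, 64)):
        field = np.zeros(shape)
        products = []
        for step in range(nsteps):
            field += np.random.rand(*shape) * 0.1          # stand-in for the solver update
            products.append(in_situ_analysis(step, field)) # no full dump to disk
        return products

    print(run_simulation(nsteps=2, shape=(16, 16, 16)))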
Photonics for aerospace sensors
NASA Astrophysics Data System (ADS)
Pellegrino, John; Adler, Eric D.; Filipov, Andree N.; Harrison, Lorna J.; van der Gracht, Joseph; Smith, Dale J.; Tayag, Tristan J.; Viveiros, Edward A.
1992-11-01
The maturation in the state-of-the-art of optical components is enabling increased applications for the technology. Most notable is the ever-expanding market for fiber optic data and communications links, familiar in both commercial and military markets. The inherent properties of optics and photonics, however, have suggested that components and processors may be designed that offer advantages over more commonly considered digital approaches for a variety of airborne sensor and signal processing applications. Various academic, industrial, and governmental research groups have been actively investigating and exploiting these properties of high bandwidth, large degree of parallelism in computation (e.g., processing in parallel over a two-dimensional field), and interconnectivity, and have succeeded in advancing the technology to the stage of systems demonstration. Such advantages as computational throughput and low operating power consumption are highly attractive for many computationally intensive problems. This review covers the key devices necessary for optical signal and image processors, some of the system application demonstration programs currently in progress, and active research directions for the implementation of next-generation architectures.
Computers for Manned Space Applications Based on Commercial Off-the-Shelf Components
NASA Astrophysics Data System (ADS)
Vogel, T.; Gronowski, M.
2009-05-01
Similar to the consumer markets, there has been an ever-increasing demand in processing power, signal processing capabilities and memory space for computers used for science data processing in space. An important driver of this development has been the payload developers for the International Space Station, requesting high-speed data acquisition and fast control loops in increasingly complex systems. Current experiments now even perform video processing and compression with their payload controllers. Nowadays the requirements for a space-qualified computer are often far beyond the capabilities of, for example, the classic SPARC architecture found in ERC32 or LEON CPUs. An increase in performance usually demands costly and power-consuming application-specific solutions. Continuous developments over the last few years have now led to an alternative approach that is based on complete electronics modules manufactured for commercial and industrial customers. Computer modules used in industrial environments with a high demand for reliability under harsh environmental conditions, like chemical reactors, electrical power plants or manufacturing lines, are entered into a selection procedure. Promising candidates then undergo a detailed characterisation process developed by Astrium Space Transportation. After thorough analysis and some modifications, these modules can replace fully qualified custom-built electronics in specific, although not safety-critical, applications in manned space. This paper focuses on the benefits of COTS-based electronics modules and the necessary analyses and modifications for their utilisation in manned space applications on the ISS. Some considerations regarding overall systems architecture will also be included. Furthermore, this paper will pinpoint issues that render such modules unsuitable for specific tasks, and justify the reasons. Finally, the conclusion of this paper will advocate the implementation of COTS-based electronics for a range of applications within specifically adapted systems. The findings in this paper are extrapolated from two reference computer systems, both having been launched in 2008. One of those was a LEON-2 based computer installed onboard the Columbus Orbital Facility, while the other system consisted mainly of a commercial PowerPC module that was modified for launch mounted on the ICC pallet in the Space Shuttle's cargo bay. Both systems are currently being upgraded and extended for future applications.
What Is A Picture Archiving And Communication System (PACS)?
NASA Astrophysics Data System (ADS)
Marceau, Carla
1982-01-01
A PACS is a digital system for acquiring, storing, moving and displaying picture or image information. It is an alternative to film jackets that has been made possible by recent breakthroughs in computer technology: telecommunications, local area nets and optical disks. The fundamental concept of the digital representation of image information is introduced. It is shown that freeing images from a material representation on film or paper leads to a dramatic increase in flexibility in our use of the images. The ultimate goal of a medical PACS system is a radiology department without film jackets. The inherent nature of digital images and the power of the computer allow instant free "copies" of images to be made and thrown away. These copies can be transmitted to distant sites in seconds, without the "original" ever leaving the archives of the radiology department. The result is a radiology department with much freer access to patient images and greater protection against lost or misplaced image information. Finally, images in digital form can be treated as data for the computer in image processing, which includes enhancement, reconstruction and even computer-aided analysis.
Developing a New Computer Game Attitude Scale for Taiwanese Early Adolescents
ERIC Educational Resources Information Center
Liu, Eric Zhi-Feng; Lee, Chun-Yi; Chen, Jen-Huang
2013-01-01
With ever increasing exposure to computer games, gaining an understanding of the attitudes held by young adolescents toward such activities is crucial; however, few studies have provided scales with which to accomplish this. This study revisited the Computer Game Attitude Scale developed by Chappell and Taylor in 1997, reworking the overall…
Computer Aided Reading Diagnosis.
ERIC Educational Resources Information Center
McEneaney, John E.
Computer technologies are having an ever-increasing influence on educational research and practice in Russia and the United States. In Russia, a number of recent papers have focused on the application of the computer as a teaching tool and on its influence in instructional organization and planning. In the United States, there is a great deal of…
Fuel cells for low power applications
NASA Astrophysics Data System (ADS)
Heinzel, A.; Hebling, C.; Müller, M.; Zedda, M.; Müller, C.
Electronic devices show an ever-increasing power demand and thus require innovative concepts for power supply. For a wide range of power and energy capacity, membrane fuel cells are an attractive alternative to conventional batteries. The main advantages are the flexibility with respect to power and capacity achievable with different devices for energy conversion and energy storage, the long lifetime and long service life, the good ecological balance, and very low self-discharge. Therefore, the development of fuel cell systems for portable electronic devices is an attractive, although also a challenging, goal. The fuel for a membrane fuel cell might be hydrogen from a hydride storage system or methanol/water as a liquid alternative. The main differences between the two systems are the much higher power density for hydrogen fuel cells, the higher energy density per weight for the liquid fuel, safety aspects, and the infrastructure for fuel supply for hydride materials. Different applications require different system designs: high-power cells are required for portable computers, low-power methanol fuel cells are needed for mobile phones in hybrid systems with batteries, and micro-fuel cells are required, e.g., for handheld PCs in the sub-Watt range. All these technologies are currently under development. Performance data and results of simulations and experimental investigations will be presented.
GREEN SUPERCOMPUTING IN A DESKTOP BOX
DOE Office of Scientific and Technical Information (OSTI.GOV)
HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY
2007-01-17
The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
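The quoted performance-power figure is easy to check: 14 Gflops at 185 W is roughly 76 Mflops/W. The reference SMP value used below is a hypothetical number chosen only to show how an "over 300% better" ratio could arise; it is not taken from the paper.

    # Back-of-the-envelope check of the Gflops-per-watt figure quoted above.
    desktop_gflops, desktop_watts = 14.0, 185.0
    mflops_per_watt = desktop_gflops / desktop_watts * 1000.0
    print(f"desktop box: {mflops_per_watt:.1f} Mflops/W")   # ~75.7 Mflops/W

    smp_mflops_per_watt = 18.0    # hypothetical reference SMP value (assumption)
    print(f"ratio vs reference: {mflops_per_watt / smp_mflops_per_watt:.1f}x")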
Transformational electronics: a powerful way to revolutionize our information world
NASA Astrophysics Data System (ADS)
Rojas, Jhonathan P.; Torres Sevilla, Galo A.; Ghoneim, Mohamed T.; Hussain, Aftab M.; Ahmed, Sally M.; Nassar, Joanna M.; Bahabry, Rabab R.; Nour, Maha; Kutbee, Arwa T.; Byas, Ernesto; Al-Saif, Bidoor; Alamri, Amal M.; Hussain, Muhammad M.
2014-06-01
With the emergence of cloud computation, we are facing the rising waves of big data. It is our time to leverage such opportunity by increasing data usage both by man and machine. We need ultra-mobile computation with high data processing speed, ultra-large memory, energy efficiency and multi-functionality. Additionally, we have to deploy energy-efficient multi-functional 3D ICs for robust cyber-physical system establishment. To achieve such lofty goals we have to mimic the human brain, which is inarguably the world's most powerful and energy-efficient computer. The brain's cortex has a folded architecture to increase surface area in an ultra-compact space to contain its neurons and synapses. Therefore, it is imperative to overcome two integration challenges: (i) finding a low-cost 3D IC fabrication process and (ii) creating foldable substrates with ultra-large-scale integration of high-performance, energy-efficient electronics. Hence, we show a low-cost generic batch process based on trench-protect-peel-recycle to fabricate rigid and flexible 3D ICs as well as high-performance flexible electronics. As of today we have made every single component needed for a fully flexible computer, including non-planar state-of-the-art FinFETs. Additionally, we have demonstrated various solid-state memories, movable MEMS devices, and energy harvesting and storage components. To show the versatility of our process, we have extended it towards other inorganic semiconductor substrates such as silicon germanium and III-V materials. Finally, we report the first ever fully flexible programmable silicon-based microprocessor towards foldable brain computation, and a wirelessly programmable stretchable and flexible thermal patch for pain management for smart bionics.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.
Defining a New 21st Century Skill-Computational Thinking: Concepts and Trends
ERIC Educational Resources Information Center
Haseski, Halil Ibrahim; Ilic, Ulas; Tugtekin, Ufuk
2018-01-01
Computational Thinking is a skill that guides the 21st century individual in the problems experienced during daily life, and it has an ever-increasing significance. Multifarious definitions were attempted to explain the concept of Computational Thinking. However, it was determined that there was no consensus on this matter in the literature and…
NASA Technical Reports Server (NTRS)
Marzwell, N. I.
2002-01-01
Economic growth has historically been associated with nations that first made use of each new energy source. There is no doubt that the potential of Solar Power Satellites as an energy system for the future is high. A conceptual cost model of the economic value of space solar power (SSP) as a source of complementary power for in-space and ground applications will be discussed. Several financial analyses will be offered based on present and new technological innovations that may compete with or be complementary to present energy market suppliers, depending on various institutional arrangements for government and the private sector in a global economy. Systems based on fossil fuels such as coal, oil, natural gas, and synthetic fuels share the problem of being finite resources and are subject to ever-increasing cost as they grow ever more scarce while the world population increases drastically. Increasing world population and requirements from emerging underdeveloped countries will also increase overall demand. This paper compares the future value of SSP with that of other terrestrial renewable energy sources in distinct geographic markets within the US, in developing countries, Europe, Asia, and Eastern Europe.
Too much information: visual research ethics in the age of wearable cameras.
Mok, Tze Ming; Cornish, Flora; Tarr, Jen
2015-06-01
When everything you see is data, what ethical principles apply? This paper argues that first-person digital recording technologies challenge traditional institutional approaches to research ethics, but that this makes ethics governance more important, not less so. We review evolving ethical concerns across four fields: Visual ethics; ubiquitous computing; mobile health; and grey literature from applied or market research. Collectively, these bodies of literature identify new challenges to traditional notions of informed consent, anonymity, confidentiality, privacy, beneficence and maleficence. Challenges come from the ever-increasing power, breadth and multi-functional integration of recording technologies, and the ubiquity and normalization of their use by participants. Some authors argue that these evolving relationships mean that institutional ethics governance procedures are irrelevant or no longer apply. By contrast, we argue that the fundamental principles of research ethics frameworks have become even more important for the protection of research participants, and that institutional frameworks need to adapt to keep pace with the ever-increasing power of recording technologies and the consequent risks to privacy. We conclude with four recommendations for efforts to ensure that contemporary visual recording research is held appropriately accountable to ethical standards: (i) minimizing the detail, scope, integration and retention of captured data, and limiting its accessibility; (ii) formulating an approach to ethics that takes in both the 'common rule' approaches privileging anonymity and confidentiality together with principles of contextual judgement and consent as an ongoing process; (iii) developing stronger ethical regulation of research outside academia; (iv) engaging the public and research participants in the development of ethical guidelines.
Broaching the Ship: Rethinking Submarines as a Signaling Tool in Naval Diplomacy
2015-03-01
late Nineteenth Century and the late Industrial Revolution, steam power, rifled barrels and steel armor supplanted sailing ships and smoothbore...them to not only construct steel ships but also innovate and incorporate new designs of all types. This period saw changes in battleship...entirely by steam. Improvements in gun design and propellant charges yielded ever-larger calibers of naval rifle with ever-increasing range. Steel
EnzyNet: enzyme classification using 3D convolutional neural networks on spatial representation
Amidi, Afshine; Megalooikonomou, Vasileios; Paragios, Nikos
2018-01-01
During the past decade, with the significant progress of computational power as well as ever-rising data availability, deep learning techniques became increasingly popular due to their excellent performance on computer vision problems. The size of the Protein Data Bank (PDB) has increased more than 15-fold since 1999, which enabled the expansion of models that aim at predicting enzymatic function via their amino acid composition. Amino acid sequence, however, is less conserved in nature than protein structure and therefore considered a less reliable predictor of protein function. This paper presents EnzyNet, a novel 3D convolutional neural network classifier that predicts the Enzyme Commission number of enzymes based only on their voxel-based spatial structure. The spatial distribution of biochemical properties was also examined as complementary information. The two-layer architecture was investigated on a large dataset of 63,558 enzymes from the PDB and achieved an accuracy of 78.4% by exploiting only the binary representation of the protein shape. Code and datasets are available at https://github.com/shervinea/enzynet. PMID:29740518
EnzyNet: enzyme classification using 3D convolutional neural networks on spatial representation.
Amidi, Afshine; Amidi, Shervine; Vlachakis, Dimitrios; Megalooikonomou, Vasileios; Paragios, Nikos; Zacharaki, Evangelia I
2018-01-01
During the past decade, with the significant progress of computational power as well as ever-rising data availability, deep learning techniques became increasingly popular due to their excellent performance on computer vision problems. The size of the Protein Data Bank (PDB) has increased more than 15-fold since 1999, which enabled the expansion of models that aim at predicting enzymatic function via their amino acid composition. Amino acid sequence, however, is less conserved in nature than protein structure and therefore considered a less reliable predictor of protein function. This paper presents EnzyNet, a novel 3D convolutional neural network classifier that predicts the Enzyme Commission number of enzymes based only on their voxel-based spatial structure. The spatial distribution of biochemical properties was also examined as complementary information. The two-layer architecture was investigated on a large dataset of 63,558 enzymes from the PDB and achieved an accuracy of 78.4% by exploiting only the binary representation of the protein shape. Code and datasets are available at https://github.com/shervinea/enzynet.
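To make the "two-layer 3D CNN over a binary voxel grid" idea concrete, here is a minimal PyTorch sketch; the 32^3 input grid, filter sizes, the 6 top-level EC classes and all layer widths are assumptions for illustration, not the exact EnzyNet architecture (the authors' code is at the GitHub link above).

    # Minimal sketch of a two-layer 3D CNN voxel classifier (assumed sizes, not EnzyNet).
    import torch
    import torch.nn as nn

    class TinyVoxelCNN(nn.Module):
        def __init__(self, n_classes=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 32, kernel_size=9),   # 32^3 -> 24^3
                nn.LeakyReLU(0.1),
                nn.Conv3d(32, 64, kernel_size=5),  # 24^3 -> 20^3
                nn.LeakyReLU(0.1),
                nn.MaxPool3d(2),                   # 20^3 -> 10^3
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 10 ** 3, 128),
                nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, x):                      # x: (batch, 1, 32, 32, 32) occupancy grid
            return self.classifier(self.features(x))

    model = TinyVoxelCNN()
    logits = model(torch.zeros(2, 1, 32, 32, 32))
    print(logits.shape)                            # torch.Size([2, 6])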
Griffee, Karen; Swindell, Sam; O'Keefe, Stephen L; Stroebel, Sandra S; Beard, Keith W; Kuo, Shih-Ya; Stroupe, Walter
2016-10-01
Retrospective data from 1,821 women and 1,064 men with one or more siblings, provided anonymously using a computer-assisted self-interview, were used to identify risk factors for sibling incest (SI); 137 were participants in SI. In order of decreasing predictive power, the risk factors identified by the multiple logistic regression analysis included ever having shared a bed for sleeping with a sibling, parent-child incest (PCI), family nudity, low levels of maternal affection, and ever having shared a tub bath with a sibling. The results were consistent with the idea that SI in many families was the cumulative result of four types of parental behaviors: (a) factors that lower external barriers to sexual behavior (e.g., permitting co-sleeping or co-bathing of sibling dyads), (b) factors that encourage nudity of children within the nuclear family and permit children to see the parent's genitals, (c) factors that lead to the siblings relying on one another for affection (e.g., diminished maternal affection), and (d) factors that eroticize young children (e.g., child sexual abuse [CSA] by a parent). Thirty-eight of the 137 SI participants were participants in coerced sibling incest (CSI). In order of decreasing predictive power, risk factors for CSI identified by multiple logistic regression analysis included ever having shared a bed for sleeping with a brother, PCI, witnessing parental physical fighting, and family nudity. SI was more likely to have been reported as CSI if the sibling had touched the reporting sibling's genitals, and less likely to have been reported as CSI if the siblings had shared a bed. © The Author(s) 2014.
A Pilot Computer-Aided Design and Manufacturing Curriculum that Promotes Engineering
NASA Technical Reports Server (NTRS)
2002-01-01
Elizabeth City State University (ECSU) is located in a community that is mostly rural in nature. The area is economically deprived when compared to the rest of the state. Many businesses lack the computerized equipment and skills needed to propel upward in today's technologically advanced society. This project will close the ever-widening gap between advantaged and disadvantaged workers as well as increase their participation with industry, NASA and/or other governmental agencies. Everyone recognizes computer technology as the catalyst for advances in design, prototyping, and manufacturing or the art of machining. Unprecedented quality control and cost-efficiency improvements are recognized through the use of computer technology. This technology has changed the manufacturing industry with advanced high-tech capabilities needed by NASA. With the ever-widening digital divide, we must continue to provide computer technology to those who are socio-economically disadvantaged.
Computational membrane biophysics: From ion channel interactions with drugs to cellular function.
Miranda, Williams E; Ngo, Van A; Perissinotti, Laura L; Noskov, Sergei Yu
2017-11-01
The rapid development of experimental and computational techniques has changed fundamentally our understanding of cellular-membrane transport. The advent of powerful computers and refined force-fields for proteins, ions, and lipids has expanded the applicability of Molecular Dynamics (MD) simulations. A myriad of cellular responses is modulated through the binding of endogenous and exogenous ligands (e.g. neurotransmitters and drugs, respectively) to ion channels. Deciphering the thermodynamics and kinetics of the ligand binding processes to these membrane proteins is at the heart of modern drug development. The ever-increasing computational power has already provided insightful data on the thermodynamics and kinetics of drug-target interactions, free energies of solvation, and partitioning into lipid bilayers for drugs. This review aims to provide a brief summary about modeling approaches to map out crucial binding pathways with intermediate conformations and free-energy surfaces for drug-ion channel binding mechanisms that are responsible for multiple effects on cellular functions. We will discuss post-processing analysis of simulation-generated data, which are then transformed to kinetic models to better understand the molecular underpinning of the experimental observables under the influence of drugs or mutations in ion channels. This review highlights crucial mathematical frameworks and perspectives on bridging different well-established computational techniques to connect the dynamics and timescales from all-atom MD and free energy simulations of ion channels to the physiology of action potentials in cellular models. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman. Copyright © 2017 Elsevier B.V. All rights reserved.
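One common post-processing step implied by "deciphering the thermodynamics ... of ligand binding" is turning sampled values of a binding or permeation coordinate into a free-energy profile by Boltzmann inversion, F(x) = -kT ln P(x). The sketch below does this for a synthetic stand-in trajectory; the data and bin choices are assumptions, not results from the reviewed work.

    # Boltzmann inversion of a sampled coordinate into a free-energy profile (illustrative).
    import numpy as np

    kT = 0.593   # kcal/mol at ~298 K

    rng = np.random.default_rng(0)
    coord = rng.normal(loc=0.0, scale=1.0, size=100_000)    # stand-in for MD samples

    prob, edges = np.histogram(coord, bins=50, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = prob > 0
    free_energy = -kT * np.log(prob[mask])
    free_energy -= free_energy.min()                        # shift the minimum to zero

    for x, f in list(zip(centers[mask], free_energy))[:5]:
        print(f"x = {x:+.2f}   F = {f:.2f} kcal/mol")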
The Potential of Incorporating Computer Games in Foreign Language Curricula
ERIC Educational Resources Information Center
Mukundan, Jayakaran; Kalajahi, Seyed Ali Rezvani; Naghdipour, Bakhtiar
2014-01-01
There is ample evidence that technology-enhanced instruction could result in students' learning. With the advancement and ever-increasing growth of technology, the use of educational electronic games or computer games in education has appealed to both educators and students. Because of their potential to enhance students' interest, motivation and…
Using Computers in Distance Study: Results of a Survey amongst Disabled Distance Students.
ERIC Educational Resources Information Center
Ommerborn, Rainer; Schuemer, Rudolf
2002-01-01
In the euphoria about new technologies in distance education there exists the danger of not sufficiently considering how ever increasing "virtualization" may exclude some student groups. An explorative study was conducted that asked disabled students about their experiences with using computers and the Internet. Overall, those questioned…
Quantifying chemical uncertainties in simulations of the ISM
NASA Astrophysics Data System (ADS)
Glover, Simon
2018-06-01
The ever-increasing power of large parallel computers now makes it possible to include increasingly sophisticated chemical models in three-dimensional simulations of the interstellar medium (ISM). This allows us to study the role that chemistry plays in the thermal balance of a realistically-structured, turbulent ISM, as well as enabling us to generate detailed synthetic observations of important atomic or molecular tracers. However, one major constraint on the accuracy of these models is the accuracy with which the input chemical rate coefficients are known. Uncertainties in these chemical rate coefficients inevitably introduce uncertainties into the model predictions. In this talk, I will review some of the methods we can use to quantify these uncertainties and to identify the key reactions where improved chemical data is most urgently required. I will also discuss a few examples, ranging from the local ISM to the high-redshift universe.
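A standard way to quantify such uncertainties, and the one sketched below, is Monte Carlo propagation: draw each rate coefficient from a spread around its recommended value, re-run the chemical model, and examine the spread of the prediction. The toy steady-state model and the uncertainty factors here are assumptions, not the networks or values discussed in the talk.

    # Monte Carlo propagation of rate-coefficient uncertainties (toy model, assumed values).
    import numpy as np

    rng = np.random.default_rng(1)

    k_form_0, k_dest_0 = 2.0e-10, 5.0e-17   # nominal rate coefficients (cm^3 s^-1)
    f_form, f_dest = 1.3, 2.0               # multiplicative 1-sigma uncertainty factors

    def equilibrium_abundance(k_form, k_dest, n=100.0):
        # steady state of formation vs destruction in a one-species toy model
        return k_form * n / k_dest

    samples = []
    for _ in range(10_000):
        k_form = k_form_0 * f_form ** rng.standard_normal()   # log-normal draw
        k_dest = k_dest_0 * f_dest ** rng.standard_normal()
        samples.append(equilibrium_abundance(k_form, k_dest))

    print(np.percentile(samples, [16, 50, 84]))   # spread in the model prediction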
Square Kilometre Array Science Data Processing
NASA Astrophysics Data System (ADS)
Nikolic, Bojan; SDP Consortium, SKA
2014-04-01
The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor (SDP) is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of further data products, archiving, and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of the SDP are: (i) identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput; (ii) managing efficiently the high internal data flow rates; (iii) a conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases; and (iv) system management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system. I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.
Testing and Validating Gadget2 for GPUs
NASA Astrophysics Data System (ADS)
Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.
2013-01-01
We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with an overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on the CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.
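A quick Amdahl's-law check shows how a ~40x speedup confined to the tree-force phase can translate into a 5-10x overall speedup: that phase would need to account for roughly 82-92% of the CPU runtime. The fractions scanned below are illustrative, not measured profile data from the project.

    # Amdahl's-law consistency check of the quoted speedups (illustrative fractions).
    def overall_speedup(fraction_accelerated, local_speedup=40.0):
        return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / local_speedup)

    for frac in (0.82, 0.85, 0.90, 0.92):
        print(f"tree-force fraction {frac:.0%}: overall {overall_speedup(frac):.1f}x speedup")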
Photonic reservoir computing: a new approach to optical information processing
NASA Astrophysics Data System (ADS)
Vandoorne, Kristof; Fiers, Martin; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter
2010-06-01
Despite ever increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology coming from the field of machine learning and neural networks that has been successfully used in several pattern classification problems, like speech and image recognition. Thus far, most implementations have been in software, limiting their speed and power efficiency. Photonics could be an excellent platform for a hardware implementation of this concept because of its inherent parallelism and unique nonlinear behaviour. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. We propose using a network of coupled Semiconductor Optical Amplifiers (SOA) and show in simulation that it could be used as a reservoir by comparing it to conventional software implementations using a benchmark speech recognition task. In spite of the differences with classical reservoir models, the performance of our photonic reservoir is comparable to that of conventional implementations and sometimes slightly better. As our implementation uses coherent light for information processing, we find that phase tuning is crucial to obtain high performance. In parallel we investigate the use of a network of photonic crystal cavities. The coupled mode theory (CMT) is used to investigate these resonators. A new framework is designed to model networks of resonators and SOAs. The same network topologies are used, but feedback is added to control the internal dynamics of the system. By adjusting the readout weights of the network in a controlled manner, we can generate arbitrary periodic patterns.
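The reservoir-computing recipe referred to above keeps the reservoir itself fixed and trains only a linear readout. The sketch below does this for a small random recurrent network standing in for the SOA network, with a ridge-regression readout on a toy memory task; all sizes, the task and the software reservoir are assumptions, not the photonic system simulated in the paper.

    # Software reservoir with a trained linear readout (illustrative stand-in for the SOA network).
    import numpy as np

    rng = np.random.default_rng(42)
    n_res, n_steps = 100, 2000

    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()     # scale spectral radius below 1

    u = rng.uniform(-1, 1, size=n_steps)              # input signal
    y_target = np.roll(u, 3)                          # toy task: recall the input 3 steps back

    states = np.zeros((n_steps, n_res))
    x = np.zeros(n_res)
    for t in range(n_steps):
        x = np.tanh(W @ x + W_in * u[t])              # fixed, untrained reservoir update
        states[t] = x

    # ridge-regression readout: solve (S^T S + a I) w = S^T y
    a = 1e-6
    w_out = np.linalg.solve(states.T @ states + a * np.eye(n_res), states.T @ y_target)
    print("training MSE:", np.mean((states @ w_out - y_target) ** 2))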
ERIC Educational Resources Information Center
Ten Dyke, Richard P.
1982-01-01
A traditional question is whether or not computers shall ever think like humans. This question is redirected to a discussion of whether computers shall ever be truly creative. Creativity is defined and a program is described that is designed to complete creatively a series problem in mathematics. (MP)
Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.
NASA Astrophysics Data System (ADS)
Moon, Young-Hyun
Ever-increasing size, complexity and operation costs in modern power systems have stimulated the intensive study of an optimal Load Shedding and Generator Rescheduling (LSGR) strategy in the sense of a secure and economic system operation. The conventional approach to LSGR has been based on the application of LP (Linear Programming) with the use of an approximately linearized model, and the LP algorithm is currently considered to be the most powerful tool for solving the LSGR problem. However, all of the LP algorithms presented in the literature essentially lead to the following disadvantages: (i) the piecewise linearization involved in the LP algorithms requires the introduction of a number of new inequalities and slack variables, which creates a significant burden on the computing facilities, and (ii) objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency in the process of computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of QP (Quadratic Programming). The changes in line flows as a result of changes to bus injection power are taken into account in the proposed model by the introduction of sensitivity coefficients, which avoids the second disadvantage mentioned above. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of the theory of optimization is included, in which results of the development of QP algorithms for LSGR based on Wolfe's method and Kuhn-Tucker theory are evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction in both computation time and memory requirements, as well as the expected lower generation costs of the optimal solution compared with those obtained with LP. Finally, it is noted that an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and that a new method for multiple contingency simulation is presented.
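A stylised version of the QP formulation outlined above is sketched below: the decision vector holds load shed and generation adjustments, the quadratic cost penalises shedding more heavily than redispatch, and linearised line-flow limits enter through sensitivity coefficients. The three-variable system, the sensitivities and the limits are invented for illustration, and scipy's SLSQP solver stands in for the thesis's Wolfe/Kuhn-Tucker QP algorithms.

    # Illustrative LSGR-style quadratic program on invented data (not the thesis's model).
    import numpy as np
    from scipy.optimize import minimize

    W = np.diag([10.0, 1.0, 1.0])        # penalise load shedding (x[0]) more than redispatch
    S = np.array([[0.6, -0.3, 0.2]])     # line-flow sensitivity to injection changes (p.u.)
    relief = np.array([0.25])            # flow reduction required on the overloaded line

    cost = lambda x: 0.5 * x @ W @ x
    cons = [{"type": "ineq", "fun": lambda x: S @ x - relief}]     # S x >= required relief
    bounds = [(0.0, 0.5), (-0.3, 0.3), (-0.3, 0.3)]                # shed >= 0, ramp limits

    res = minimize(cost, x0=np.zeros(3), method="SLSQP", bounds=bounds, constraints=cons)
    print(res.x, res.fun)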
Optical interconnection networks for high-performance computing systems
NASA Astrophysics Data System (ADS)
Biberman, Aleksandr; Bergman, Keren
2012-04-01
Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.
Web-based services for drug design and discovery.
Frey, Jeremy G; Bird, Colin L
2011-09-01
Reviews of the development of drug discovery through the 20th century recognised the importance of chemistry and increasingly bioinformatics, but had relatively little to say about the importance of computing and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of the computational techniques since the emergence of the World Wide Web, commonly now referred to as simply 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.
ERIC Educational Resources Information Center
McDermott, John J., Ed.
This course, developed for use in secondary and adult education, is an effort to describe the cost-benefit ratio of the various methods of generation of electrical power in an era when the requirement for additional sources of power is growing at an ever-increasing rate and environmental protection is a major concern. This course was written and…
Efficient flapping flight of pterosaurs
NASA Astrophysics Data System (ADS)
Strang, Karl Axel
In the late eighteenth century, humans discovered the first pterosaur fossil remains and have been fascinated by their existence ever since. Pterosaurs exploited their membrane wings in a sophisticated manner for flight control and propulsion, and were likely the most efficient and effective flyers ever to inhabit our planet. The flapping gait is a complex combination of motions that sustains and propels an animal in the air. Because pterosaurs were so large with wingspans up to eleven meters, if they could have sustained flapping flight, they would have had to achieve high propulsive efficiencies. Identifying the wing motions that contribute the most to propulsive efficiency is key to understanding pterosaur flight, and therefore to shedding light on flapping flight in general and the design of efficient ornithopters. This study is based on published results for a very well-preserved specimen of Coloborhynchus robustus, for which the joints are well-known and thoroughly described in the literature. Simplifying assumptions are made to estimate the characteristics that cannot be inferred directly from the fossil remains. For a given animal, maximizing efficiency is equivalent to minimizing power at a given thrust and speed. We therefore aim at finding the flapping gait, that is, the joint motions, that minimizes the required flapping power. The power is computed from the aerodynamic forces created during a given wing motion. We develop an unsteady three-dimensional code based on the vortex-lattice method, which correlates well with published results for unsteady motions of rectangular wings. In the aerodynamic model, the rigid pterosaur wing is defined by the position of the bones. In the aeroelastic model, we add the flexibility of the bones and of the wing membrane. The nonlinear structural behavior of the membrane is reduced to a linear modal decomposition, assuming small deflections about the reference wing geometry. The reference wing geometry is computed for the membrane subject to glide loads and pretension from the wing joint positions. The flapping gait is optimized in a two-stage procedure. First the design space is explored using a binary genetic algorithm. The best design points are then used as starting points in a sequential quadratic programming optimization algorithm. This algorithm is used to refine the solutions by precisely satisfying the constraints. The refined solutions are generally found in less than twenty major iterations, and constraints are generally violated by less than 0.1%. We find that the optimal motions are in agreement with previous results for simple wing motions. By adding joint motions, the required flapping power is reduced by 7% to 17%. Because of the large uncertainties for some estimates, we investigate the sensitivity of the optimized flapping gait. We find that the optimal motions are sensitive mainly to flight speed, body accelerations, and to the material properties of the wing membrane. The optimal flight speed found correlates well with other studies of pterosaur flapping flight, and is 31% to 37% faster than previous estimates based on glide performance. Accounting for the body accelerations yields an increase of 10% to 16% in required flapping power. When including the aeroelastic effects, the optimal flapping gait is only slightly modified to accommodate for the deflections of stiff membranes. For a flexible membrane, the motion is significantly modified and the power increased by up to 57%.
Finally, the flapping gait and required power compare well with published results for similar wing motions. Some published estimates of required power assumed a propulsive efficiency of 100%, whereas the propulsive efficiency computed for Coloborhynchus robustus ranges between 54% and 87%.
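The two-stage optimization strategy described above (global evolutionary exploration followed by gradient-based refinement that enforces the constraints) can be sketched with scipy on a toy problem; the surrogate "flapping power" objective and "thrust" constraint below are placeholders, not the vortex-lattice model of the study, and differential evolution stands in for the binary genetic algorithm.

    # Toy two-stage search: global exploration, then constrained SLSQP refinement (assumed problem).
    from scipy.optimize import differential_evolution, minimize

    def required_power(x):               # placeholder surrogate for flapping power
        return (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 0.5) ** 2 + x[0] * x[1]

    def thrust(x):                       # placeholder thrust measure
        return x[0] ** 2 + x[1] ** 2

    bounds = [(-2.0, 2.0), (-2.0, 2.0)]

    # stage 1: global exploration of the design space (stands in for the genetic algorithm)
    coarse = differential_evolution(required_power, bounds, seed=0, tol=1e-3)

    # stage 2: local refinement that precisely satisfies the thrust constraint
    cons = [{"type": "ineq", "fun": lambda x: thrust(x) - 3.0}]    # thrust >= 3 (assumed limit)
    refined = minimize(required_power, coarse.x, method="SLSQP", bounds=bounds, constraints=cons)
    print(coarse.x, refined.x, refined.fun)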
Multi-Head Very High Power Strobe System For Motion Picture Special Effects
NASA Astrophysics Data System (ADS)
Lovoi, P. A.; Fink, Michael L.
1983-10-01
A very large camera synchronizable strobe system has been developed for motion picture special effects. This system, the largest ever built, was delivered to MGM/UA to be used in the movie "War Games". The system consists of 12 individual strobe heads and a power supply distribution system. Each strobe head operates independently and may be flashed up to 24 times per second under computer control. An energy of 480 Joules per flash is used in six strobe heads and 240 Joules per flash in the remaining six strobe heads. The beam pattern is rectangular with a FWHM of 60° x 48°.
Experimental Results for Temporally Overlapping Pulses from Quantel EverGreen 200 Laser
NASA Technical Reports Server (NTRS)
Watkins, A. Neal
2013-01-01
This report will detail the experimental results and observations obtained while investigating the feasibility of temporally overlapping the two laser pulses from a Quantel EverGreen 200 laser. This laser was specifically designed for Particle Image Velocimetry (PIV) applications and operates by emitting two 532 nm laser pulses that are separated by an adjustable finite time (typically on the order of tens to hundreds of microseconds). However, this model of laser has recently found application in Pressure Sensitive Paint (PSP) testing, especially for rotorcraft research. For this testing, it is desirable to use only one laser pulse. While this is easily done by firing only one of the laser heads, more excitation energy could conceivably be obtained if both laser heads are fired with zero pulse separation. In addition, large field-of-view PIV measurements have recently become possible and need ever-increasing laser power to illuminate the larger areas. For this work, two different methods of timing the laser are investigated, using both a traditional power meter to monitor laser power and a fast photodiode to determine pulse separation. The results are presented here, as well as some simple implications for PIV experiments using these methods.
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off-the-Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of application data sets, the requirement for high performance, large capacity, reliable, secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on an ever-increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be expended this year and next on server-class or mid-range storage systems.
Program of Basic Research in Distributed Tactical Decision Making.
1987-08-05
computer-simulated game representing a "space war" battle context were devised and two experiments were conducted to test some of the underlying...assume that advanced communication and computation of ever increasing capabilities will ensure successful group performance simply by improving the...There was a total of 12 subjects, three in each condition. Apparatus: A computer-controlled DTDM environment was developed using a VAX-11/750. The DTDM
NASA Technical Reports Server (NTRS)
Moravec, Hans
1993-01-01
Exploration and colonization of the universe awaits, but Earth-adapted biological humans are ill-equipped to respond to the challenge. Machines have gone farther and seen more, limited though they presently are by insect-like behavior inflexibility. As they become smarter over the coming decades, space will be theirs. Organizations of robots of ever increasing intelligence and sensory and motor ability will expand and transform what they occupy, working with matter, space and time. As they grow, a smaller and smaller fraction of their territory will be undeveloped frontier. Competitive success will depend more and more on using already available matter and space in ever more refined and useful forms. The process, analogous to the miniaturization that makes today's computers a trillion times more powerful than the mechanical calculators of the past, will gradually transform all activity from grossly physical homesteading of raw nature, to minimum-energy quantum transactions of computation. The final frontier will be urbanized, ultimately into an arena where every bit of activity is a meaningful computation: the inhabited portion of the universe will be transformed into a cyberspace. Because it will use resources more efficiently, a mature cyberspace of the distant future will be effectively much bigger than the present physical universe. While only an infinitesimal fraction of existing matter and space is doing interesting work, in a well developed cyberspace every bit will be part of a relevant computation or storing a useful datum. Over time, more compact and faster ways of using space and matter will be invented, and used to restructure the cyberspace, effectively increasing the amount of computational spacetime per unit of physical spacetime. Computational speed-ups will affect the subjective experience of entities in the cyberspace in a paradoxical way. At first glimpse, there is no subjective effect, because everything, inside and outside the individual, speeds up equally. But, more subtly, speed-up produces an expansion of the cyber universe, because, as thought accelerates, more subjective time passes during the fixed (probably lightspeed) physical transit time of a message between a given pair of locations - so those fixed locations seem to grow farther apart. Also, as information storage is made continually more efficient through both denser utilization of matter and more efficient encodings, there will be increasingly more cyber-stuff between any two points. The effect may somewhat resemble the continuous-creation process in the old steady-state theory of the physical universe of Hoyle, Bondi and Gold, where hydrogen atoms appear just fast enough throughout the expanding cosmos to maintain a constant density.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-27
...Power Wind Holdings, Inc.'s application for market-based rate authority, with an accompanying rate... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-3405-000] EverPower Wind Holdings, Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...
Diversity in computing technologies and strategies for dynamic resource allocation
Garzoglio, G.; Gutsche, O.
2015-12-23
High Energy Physics (HEP) is a very data-intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer detail, requiring ever increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP has provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.
LOx / LCH4: A Unifying Technology for Future Exploration
NASA Technical Reports Server (NTRS)
Falker, John; Terrier, Douglas; Clayton, Ronald G.; Banker, Brian; Ryan, Abigail
2015-01-01
Reduced mass due to increasing commonality between spacecraft subsystems such as power and propulsion has been identified as critical to enabling human missions to Mars. This project represents the first ever integrated propulsion and power system testing and lays the foundation for future sounding rocket flight testing, which will yield the first in-space ignition of a LOx / LCH4 rocket engine.
Structural Weight Estimation for Launch Vehicles
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd
2002-01-01
This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist at several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever increasing computational power, and platform-independent programming languages such as Java provide new means to create greater depth of analysis tools which can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.
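As a rough illustration of the kind of mass estimating relationship the abstract refers to, the sketch below implements a hypothetical power-law MER with a technology factor. The function name, coefficients, and units are invented for illustration and are not taken from the VAB tools; in practice the coefficients would be regressed from historical vehicle data and corrected with the off-line structural analyses described above.

```python
def wing_structural_mass(load_intensity, area, tech_factor=1.0, a=2.3, b=0.6, c=1.1):
    """Hypothetical power-law mass estimating relationship (MER):
    mass = tech_factor * a * load_intensity**b * area**c.
    `tech_factor` scales the historical regression to reflect a material or
    technology change (e.g., composites instead of aluminum)."""
    return tech_factor * a * load_intensity**b * area**c

# Compare a baseline design with a design carrying an assumed composite technology factor
baseline  = wing_structural_mass(load_intensity=4.5, area=120.0)
composite = wing_structural_mass(load_intensity=4.5, area=120.0, tech_factor=0.8)
print(f"baseline {baseline:.0f} kg, composite {composite:.0f} kg")
```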
SeqWare Query Engine: storing and searching sequence data in the cloud.
O'Connor, Brian D; Merriman, Barry; Nelson, Stanley F
2010-12-21
Since the introduction of next-generation DNA sequencers, the rapid increase in sequencer throughput, and associated drop in costs, has resulted in more than a dozen human genomes being resequenced over the last few years. These efforts are merely a prelude for a future in which genome resequencing will be commonplace for both biomedical research and clinical applications. The dramatic increase in sequencer output strains all facets of computational infrastructure, especially databases and query interfaces. The advent of cloud computing, and a variety of powerful tools designed to process petascale datasets, provide a compelling solution to these ever increasing demands. In this work, we present the SeqWare Query Engine which has been created using modern cloud computing technologies and designed to support databasing information from thousands of genomes. Our backend implementation was built using the highly scalable, NoSQL HBase database from the Hadoop project. We also created a web-based frontend that provides both a programmatic and interactive query interface and integrates with widely used genome browsers and tools. Using the query engine, users can load and query variants (SNVs, indels, translocations, etc) with a rich level of annotations including coverage and functional consequences. As a proof of concept we loaded several whole genome datasets including the U87MG cell line. We also used a glioblastoma multiforme tumor/normal pair to both profile performance and provide an example of using the Hadoop MapReduce framework within the query engine. This software is open source and freely available from the SeqWare project (http://seqware.sourceforge.net). The SeqWare Query Engine provided an easy way to make the U87MG genome accessible to programmers and non-programmers alike. This enabled a faster and more open exploration of results, quicker tuning of parameters for heuristic variant calling filters, and a common data interface to simplify development of analytical tools. The range of data types supported, the ease of querying and integrating with existing tools, and the robust scalability of the underlying cloud-based technologies make SeqWare Query Engine a natural fit for storing and searching ever-growing genome sequence datasets.
SeqWare Query Engine: storing and searching sequence data in the cloud
2010-01-01
Background Since the introduction of next-generation DNA sequencers, the rapid increase in sequencer throughput, and associated drop in costs, has resulted in more than a dozen human genomes being resequenced over the last few years. These efforts are merely a prelude for a future in which genome resequencing will be commonplace for both biomedical research and clinical applications. The dramatic increase in sequencer output strains all facets of computational infrastructure, especially databases and query interfaces. The advent of cloud computing, and a variety of powerful tools designed to process petascale datasets, provide a compelling solution to these ever increasing demands. Results In this work, we present the SeqWare Query Engine which has been created using modern cloud computing technologies and designed to support databasing information from thousands of genomes. Our backend implementation was built using the highly scalable, NoSQL HBase database from the Hadoop project. We also created a web-based frontend that provides both a programmatic and interactive query interface and integrates with widely used genome browsers and tools. Using the query engine, users can load and query variants (SNVs, indels, translocations, etc) with a rich level of annotations including coverage and functional consequences. As a proof of concept we loaded several whole genome datasets including the U87MG cell line. We also used a glioblastoma multiforme tumor/normal pair to both profile performance and provide an example of using the Hadoop MapReduce framework within the query engine. This software is open source and freely available from the SeqWare project (http://seqware.sourceforge.net). Conclusions The SeqWare Query Engine provided an easy way to make the U87MG genome accessible to programmers and non-programmers alike. This enabled a faster and more open exploration of results, quicker tuning of parameters for heuristic variant calling filters, and a common data interface to simplify development of analytical tools. The range of data types supported, the ease of querying and integrating with existing tools, and the robust scalability of the underlying cloud-based technologies make SeqWare Query Engine a natural fit for storing and searching ever-growing genome sequence datasets. PMID:21210981
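To make the querying idea concrete, here is a minimal sketch of the kind of coverage- and consequence-based variant filter such an engine exposes through its programmatic interface. The record layout and field names are hypothetical and do not reflect the actual SeqWare Query Engine API or its HBase schema.

```python
# Hypothetical variant records; field names are illustrative, not the SeqWare schema.
variants = [
    {"chrom": "chr7", "pos": 55249071, "type": "SNV",   "coverage": 84, "consequence": "missense"},
    {"chrom": "chr7", "pos": 55242465, "type": "indel", "coverage": 12, "consequence": "frameshift"},
    {"chrom": "chr9", "pos": 21971120, "type": "SNV",   "coverage": 45, "consequence": "synonymous"},
]

def query(records, min_coverage=30, consequences=("missense", "frameshift")):
    """Keep variants that meet a coverage threshold and a set of functional consequences."""
    return [r for r in records
            if r["coverage"] >= min_coverage and r["consequence"] in consequences]

for hit in query(variants):
    print(hit["chrom"], hit["pos"], hit["type"], hit["consequence"])
```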
Nurturing a growing field: Computers & Geosciences
NASA Astrophysics Data System (ADS)
Mariethoz, Gregoire; Pebesma, Edzer
2017-10-01
Computational issues are becoming increasingly critical for virtually all fields of geoscience. This includes the development of improved algorithms and models, strategies for implementing high-performance computing, or the management and visualization of the large datasets provided by an ever-growing number of environmental sensors. Such issues are central to scientific fields as diverse as geological modeling, Earth observation, geophysics or climatology, to name just a few. Related computational advances, across a range of geoscience disciplines, are the core focus of Computers & Geosciences, which is thus a truly multidisciplinary journal.
High resolution extremity CT for biomechanics modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashby, A.E.; Brand, H.; Hollerbach, K.
1995-09-23
With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.
Largest Ever Wind Power Commitments Set Clean Energy Example for Nation
Federal agencies in Colorado are setting an example for the rest of the nation and the leaders of business
The equal load-sharing model of cascade failures in power grids
NASA Astrophysics Data System (ADS)
Scala, Antonio; De Sanctis Lucentini, Pier Giorgio
2016-11-01
Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing power demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
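The load-growth scenario can be illustrated with a minimal equal load-sharing (fiber-bundle-style) cascade, sketched below under the assumption of uniformly distributed element capacities; this is not the authors' model code, but it reproduces the qualitative behavior in which the cascade either stops with most elements intact or the whole system breaks down abruptly once demand crosses a critical load.

```python
import numpy as np

def equal_load_sharing_cascade(capacities, total_load):
    """Equal load-sharing cascade: all surviving elements carry the same share
    of the total load; overloaded elements fail and the load is reshared
    among the survivors until the cascade stops (or everything has failed)."""
    alive = np.ones(len(capacities), dtype=bool)
    while True:
        n_alive = int(alive.sum())
        if n_alive == 0:
            return 0                              # complete breakdown (blackout)
        load_per_element = total_load / n_alive
        newly_failed = alive & (capacities < load_per_element)
        if not newly_failed.any():
            return n_alive                        # cascade has stopped
        alive &= ~newly_failed

# Load-growth scenario: ramp up total demand on 10,000 elements with
# capacities drawn uniformly from [0.5, 1.5] and watch for abrupt breakdown.
rng = np.random.default_rng(0)
capacities = rng.uniform(0.5, 1.5, size=10_000)
for total_load in (5000, 5500, 6000, 6500):
    survivors = equal_load_sharing_cascade(capacities, total_load)
    print(f"total load {total_load}: {survivors} elements survive")
```

With these assumed capacities the lowest load leaves the system intact, an intermediate load ends with a partial cascade, and loads slightly above the critical value drive the survivor count straight to zero, which is the abruptness the abstract describes.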
Neural simulations on multi-core architectures.
Eichner, Hubert; Klug, Tobias; Borst, Alexander
2009-01-01
Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.
Neural Simulations on Multi-Core Architectures
Eichner, Hubert; Klug, Tobias; Borst, Alexander
2009-01-01
Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing. PMID:19636393
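As a toy illustration of the user-transparent load balancing mentioned above, the sketch below assigns model cells to cores with a greedy longest-processing-time heuristic. The cost model (compartments per cell) and the heuristic itself are illustrative assumptions, not the authors' implementation.

```python
from heapq import heappush, heappop

def balance_cells(costs, n_cores):
    """Greedy longest-processing-time assignment of cells to cores.
    `costs` is an estimated per-cell workload (e.g., number of compartments);
    the heaviest cells are placed first on the currently least-loaded core."""
    heap = [(0.0, core) for core in range(n_cores)]          # (accumulated load, core id)
    assignment = {core: [] for core in range(n_cores)}
    for cell, cost in sorted(enumerate(costs), key=lambda kv: kv[1], reverse=True):
        load, core = heappop(heap)                           # least-loaded core so far
        assignment[core].append(cell)
        heappush(heap, (load + cost, core))
    return assignment

# Example: 8 model neurons with different compartment counts, spread over 4 cores
print(balance_cells([1200, 300, 950, 80, 400, 760, 150, 610], 4))
```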
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...
NULL Convention Floating Point Multiplier
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
NULL convention floating point multiplier.
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
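The underlying single-precision multiplication datapath (independent of the NULL convention logic circuit style) can be sketched in a few lines: XOR the signs, add the biased exponents, multiply the significands with their hidden leading bits, normalize, and truncate rather than round, as the paper does. The sketch below is a software illustration that handles normal numbers only and ignores zeros, infinities, NaNs, and subnormals.

```python
import struct

def fp32_fields(x):
    """Unpack an IEEE 754 single-precision value into (sign, biased exponent, fraction)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def fp32_multiply(a, b):
    """Single-precision multiplication: normal numbers only, truncation instead of rounding."""
    sa, ea, ma = fp32_fields(a)
    sb, eb, mb = fp32_fields(b)
    sign = sa ^ sb
    sig = (ma | 1 << 23) * (mb | 1 << 23)   # 24-bit significands -> up to 48-bit product
    exp = ea + eb - 127                     # add exponents, remove the doubled bias
    if sig & (1 << 47):                     # product in [2, 4): normalize right by one
        sig >>= 1
        exp += 1
    frac = (sig >> 23) & 0x7FFFFF           # keep 23 fraction bits (truncate, no rounding)
    bits = (sign << 31) | (exp << 23) | frac
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(fp32_multiply(3.5, -2.25))   # exactly -7.875
```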
Binary millisecond pulsar discovery via gamma-ray pulsations.
Pletsch, H J; Guillemot, L; Fehrmann, H; Allen, B; Kramer, M; Aulbert, C; Ackermann, M; Ajello, M; de Angelis, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Borgland, A W; Bottacini, E; Brandt, T J; Bregeon, J; Brigida, M; Bruel, P; Buehler, R; Buson, S; Caliandro, G A; Cameron, R A; Caraveo, P A; Casandjian, J M; Cecchi, C; Çelik, Ö; Charles, E; Chaves, R C G; Cheung, C C; Chiang, J; Ciprini, S; Claus, R; Cohen-Tanugi, J; Conrad, J; Cutini, S; D'Ammando, F; Dermer, C D; Digel, S W; Drell, P S; Drlica-Wagner, A; Dubois, R; Dumora, D; Favuzzi, C; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Fusco, P; Gargano, F; Gehrels, N; Germani, S; Giglietto, N; Giordano, F; Giroletti, M; Godfrey, G; Grenier, I A; Grondin, M-H; Grove, J E; Guiriec, S; Hadasch, D; Hanabata, Y; Harding, A K; den Hartog, P R; Hayashida, M; Hays, E; Hill, A B; Hou, X; Hughes, R E; Jóhannesson, G; Jackson, M S; Jogler, T; Johnson, A S; Johnson, W N; Kataoka, J; Kerr, M; Knödlseder, J; Kuss, M; Lande, J; Larsson, S; Latronico, L; Lemoine-Goumard, M; Longo, F; Loparco, F; Lovellette, M N; Lubrano, P; Massaro, F; Mayer, M; Mazziotta, M N; McEnery, J E; Mehault, J; Michelson, P F; Mitthumsiri, W; Mizuno, T; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Nakamori, T; Nemmen, R; Nuss, E; Ohno, M; Ohsugi, T; Omodei, N; Orienti, M; Orlando, E; de Palma, F; Paneque, D; Perkins, J S; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Ray, P S; Razzano, M; Reimer, A; Reimer, O; Reposeur, T; Ritz, S; Romani, R W; Romoli, C; Sanchez, D A; Saz Parkinson, P M; Schulz, A; Sgrò, C; do Couto e Silva, E; Siskind, E J; Smith, D A; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Tanaka, T; Thayer, J B; Thayer, J G; Thompson, D J; Tibaldo, L; Tinivella, M; Troja, E; Usher, T L; Vandenbroucke, J; Vasileiou, V; Vianello, G; Vitale, V; Waite, A P; Winer, B L; Wood, K S; Wood, M; Yang, Z; Zimmer, S
2012-12-07
Millisecond pulsars, old neutron stars spun up by accreting matter from a companion star, can reach high rotation rates of hundreds of revolutions per second. Until now, all such "recycled" rotation-powered pulsars have been detected by their spin-modulated radio emission. In a computing-intensive blind search of gamma-ray data from the Fermi Large Area Telescope (with partial constraints from optical data), we detected a 2.5-millisecond pulsar, PSR J1311-3430. This unambiguously explains a formerly unidentified gamma-ray source that had been a decade-long enigma, confirming previous conjectures. The pulsar is in a circular orbit with an orbital period of only 93 minutes, the shortest of any spin-powered pulsar binary ever found.
ERIC Educational Resources Information Center
Guimarães, Bruno; Ribeiro, José; Cruz, Bernardo; Ferreira, André; Alves, Hélio; Cruz-Correia, Ricardo; Madeira, Maria Dulce; Ferreira, Maria Amélia
2018-01-01
The time, material, and staff-consuming nature of anatomy's traditional pen-and-paper assessment system, the increase in the number of students enrolling in medical schools and the ever-escalating workload of academic staff have made the use of computer-based assessment (CBA) an attractive proposition. To understand the impact of such shift in the…
Achievements and challenges in structural bioinformatics and computational biophysics.
Samish, Ilan; Bourne, Philip E; Najmanovich, Rafael J
2015-01-01
The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years, developments that are captured annually through the 3DSIG meeting, upon which this article reflects. An increase in the accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved along with their cross-validation with other computational and experimental results. The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. © The Author 2014. Published by Oxford University Press.
Achievements and challenges in structural bioinformatics and computational biophysics
Samish, Ilan; Bourne, Philip E.; Najmanovich, Rafael J.
2015-01-01
Motivation: The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years, developments that are captured annually through the 3DSIG meeting, upon which this article reflects. Results: An increase in the accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved along with their cross-validation with other computational and experimental results. Conclusion: The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. Contact: Rafael.Najmanovich@USherbrooke.ca PMID:25488929
Mobile modeling in the molecular sciences
The art of modeling in the molecular sciences is highly dependent on the available computational technology, the underlying data, and the ability to collaborate. With the ever increasing market share of mobile devices, it is assumed by many that tablets will overtake laptops as the...
Abruptness of Cascade Failures in Power Grids
NASA Astrophysics Data System (ADS)
Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio
2014-01-01
Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
Abruptness of cascade failures in power grids.
Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio
2014-01-15
Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
Abruptness of Cascade Failures in Power Grids
Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio
2014-01-01
Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into “super-grids”. PMID:24424239
Creep of Hi-Nicalon S Fiber Tows at Elevated Temperature in Air and in Steam
2013-03-01
materials”[28]. Materials have always been a limiting factor in the advancements of technology. The ever increasing demand for aerospace vehicles that are...matrix composites are designed to have load-carrying capacity at high temperatures in extreme environments. Ceramic matrix composites are prime...engines, gas turbines for electrical power/steam cogeneration, as well as nuclear power plant components. It is recognized that the structural
Instruction in Pharmacokinetics: A Computer-Assisted Demonstration System
ERIC Educational Resources Information Center
Kahn, Norman; Bigger, J. Thomas
1974-01-01
The emerging discipline of clinical pharmacology is generating an ever increasing data base on the physiological disposition of a large number of drugs in man. Presents a system which would render this information readily understandable to students, regardless of their mathematical facility. (Author/PG)
From Awareness to Action: Determining the climate sensitivities that influence decision makers
NASA Astrophysics Data System (ADS)
Brown, C.
2017-12-01
Through the growth of computing power and analytical methods, a range of valuable and innovative tools allow the exhaustive exploration of a water system's response to a limitless set of scenarios. Similarly, possible adaptive actions can be evaluated across this broad set of possible futures. Finally, an ever increasing set of performance indicators is available to judge the relative value of a particular action over others. However, it's unclear whether this is improving the flow of actionable information or further cluttering it. This presentation will share lessons learned and other intuitions from a set of experiences engaging with public and private water managers and investors in the use of robustness-based climate vulnerability and adaptation analysis. Based on this background, a case for reductionism and focus on financial vulnerability will be forwarded. In addition, considerations for simpler, practical approaches for smaller water utilities will be discussed.
Multigigabit optical transceivers for high-data rate military applications
NASA Astrophysics Data System (ADS)
Catanzaro, Brian E.; Kuznia, Charlie
2012-01-01
Avionics has experienced an ever increasing demand for processing power and communication bandwidth. Currently deployed avionics systems require gigabit communication using opto-electronic transceivers connected with parallel optical fiber. Ultra Communications has developed a series of transceiver solutions combining ASIC technology with flip-chip bonding and advanced opto-mechanical molded optics. Ultra Communications' custom high-speed ASIC chips are developed using a silicon-on-sapphire (SoS) process. These circuits are flip-chip bonded with sources (VCSEL arrays) and detectors (PIN diodes) to create an Opto-Electronic Integrated Circuit (OEIC). These have been combined with micro-optics assemblies to create transceivers with interfaces to standard fiber array (MT) cabling technology. We present an overview of the demands for transceivers in military applications and how new-generation transceivers leverage both previous-generation military optical transceivers and commercial high performance computing optical transceivers.
Gender, technology change and globalization: the case of China.
Guo, H; Zhao, M
1999-01-01
This paper reviews the experience of women workers in China while the country's economy is changing into a globalized, technologically advanced one. New computer-based technology is increasingly acknowledged as a powerful and pervasive force that can shape or, at least in many ways, affect employment. It is hailed for opening up fresh employment opportunities and reducing the physical stress involved in work. However, the possibilities of redundancies or intensification of workload also exist. By focusing on changes in women's work, the article reveals the contradictions inherent in following a development path based on ever-higher levels of technology in the context of an intensive mode of production, to which productivity is the core value. The economy is bolstered and some workers gain employment in expanding industries. However, workers, who lack access to training and who are reliant on the dwindling state support for their reproductive responsibilities, are marginalized and seek employment in the growing informal economy.
Facilitators and barriers to disclosing abuse among women with disabilities.
Curry, Mary Ann; Renker, Paula; Robinson-Whelen, Susan; Hughes, Rosemary B; Swank, Paul; Oschwald, Mary; Powers, Laurie E
2011-01-01
An anonymous audio computer-assisted self-interview (A-CASI) designed to increase awareness of abuse was completed by 305 women with diverse disabilities. Data were also collected about lifetime and past year abuse; perpetrator risk characteristics; facilitators and barriers to disclosing abuse; abuse disclosure to a health provider, case manager, or police officer; and whether a health provider had ever discussed abuse or personal safety. A total of 276 (90%) women reported abuse, 208 (68%) reported abuse within the past year. Women who reported the most abuse experiences in the past year and the most dangerous perpetrators endorsed fewer facilitators and more barriers, but were also more likely to have ever disclosed abuse. Only 15% reported that a health provider had ever discussed abuse and personal safety.
The evolving trend in spacecraft health analysis
NASA Technical Reports Server (NTRS)
Kirkpatrick, Russell L.
1993-01-01
The Space Flight Operations Center inaugurated the concept of a central data repository for spacecraft data and the distribution of computing power to the end users for that data's analysis at the Jet Propulsion Laboratory. The Advanced Multimission Operations System is continuing the evolution of this concept as new technologies emerge. Constant improvements in data management tools, data visualization, and hardware lead to ever expanding ideas for improving the analysis of spacecraft health in an era of budget constrained mission operations systems. The foundation of this evolution, its history, and its current plans will be discussed.
Adaptation of XMM-Newton SAS to GRID and VO architectures via web
NASA Astrophysics Data System (ADS)
Ibarra, A.; de La Calle, I.; Gabriel, C.; Salgado, J.; Osuna, P.
2008-10-01
The XMM-Newton Scientific Analysis Software (SAS) is a robust software package that has allowed users to produce good scientific results since the beginning of the mission. This has been possible given the SAS capability to evolve with the advent of new technologies and to adapt to the needs of the scientific community. The prototype of the Remote Interface for Science Analysis (RISA) presented here is one such example, which provides remote analysis of XMM-Newton data with access to all the existing SAS functionality, while making use of GRID computing technology. This technology has recently emerged within the astrophysical community to tackle the long-standing problem of providing sufficient computing power for the reduction of large amounts of data.
USDA-ARS?s Scientific Manuscript database
The number of females genotyped in the US has increased to 12,650 per month, comprising 74% of the total genotypes received in 2013. Concerns of increased computing time of the ever-growing predictor population set and linkage decay between the ancestral population and the current animals have arise...
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
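A minimal sketch of the fixed-interval, scalable-workload idea is given below: run progressively finer-grained versions of a task until a wall-clock budget expires and report the finest resolution completed. The integration task and the doubling schedule are illustrative assumptions, not the patented benchmark itself.

```python
import time

def scalable_benchmark(task, time_budget_s):
    """Run `task` at ever finer resolution for a fixed wall-clock budget and
    return the finest resolution that completed (the 'degree of progress')."""
    resolution, completed = 1, 0
    deadline = time.perf_counter() + time_budget_s
    while time.perf_counter() < deadline:
        task(resolution)          # solve the problem at this resolution
        completed = resolution
        resolution *= 2           # refine and try again
    return completed

# Hypothetical scalable task: midpoint-rule estimate of pi with n rectangles
def estimate_pi(n):
    h = 1.0 / n
    return h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n))

print("degree of progress:", scalable_benchmark(estimate_pi, 1.0))
```

A faster machine completes more refinement steps within the same interval, so the returned resolution serves as the benchmarking rating described in the abstract.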
ExM:System Support for Extreme-Scale, Many-Task Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, Daniel S
The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem solving methods and application classes. In this document, we report on combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at University of Chicago, over the first 8 months (through April 30, 2011).
Polar lunar power ring: Propulsion energy resource
NASA Technical Reports Server (NTRS)
Galloway, Graham Scott
1990-01-01
A ring shaped grid of photovoltaic solar collectors encircling a lunar pole at 80 to 85 degrees latitude is proposed as the primary research, development, and construction goal for an initial lunar base. The polar Lunar Power Ring (LPR) is designed to provide continuous electrical power in ever increasing amounts as collectors are added to the ring grid. The LPR can provide electricity for any purpose indefinitely, barring a meteor strike. The associated rail infrastructure and inherently expandable power levels place the LPR as an ideal tool to power an innovative propulsion research facility or a trans-Jovian fleet. The proposed initial output range is 90 Mw to 90 Gw.
Space Station Power Generation in Support of the Beta Gimbal Anomaly Resolution
NASA Technical Reports Server (NTRS)
Delleur, Ann M.; Propp, Timothy W.
2003-01-01
The International Space Station (ISS) is the largest and most complex spacecraft ever assembled and operated in orbit. The first U.S. photovoltaic (PV) module, containing two solar arrays, was launched, installed, and activated in early December 2000. After the first week of continuously rotating the U.S. solar arrays, engineering personnel in the ISS Mission Evaluation Room (MER) observed higher than expected electrical currents on the drive motor in one of the Beta Gimbal Assemblies (BGA), the mechanism used to maneuver a U.S. solar array. The magnitude of the motor currents continued to increase over time on both BGA's, creating concerns about the ability of the gimbals to continue pointing the solar arrays towards the sun, a function critical for continued assembly of the ISS. A number of engineering disciplines convened in May 2001 to address this on-orbit hardware anomaly. This paper reviews the ISS electrical power system (EPS) analyses performed to develop viable operational workarounds that would minimize BGA use while maintaining sufficient solar array power to continue assembly of the ISS. Additionally, EPS analyses performed in support of on-orbit BGA troubleshooting exercises is reviewed. EPS capability analyses were performed using SPACE, a computer code developed by NASA Glenn Research Center (GRC) for the ISS program office.
Limits on fundamental limits to computation.
Markov, Igor L
2014-08-14
An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.
The first powerful steps in strategic development.
Chang, Y N; Platt, W
1986-11-01
In an ever-changing business climate, the need for strategically oriented management increases, for the organization that is insensitive and unresponsive to its environment will not survive. The authors of this "Action Plan" examine the challenges facing the health industry and outline a series of actions for initiating strategic-driven management.
Government, Coercive Power and the Perceived Legitimacy of Canadian Post-Secondary Institutions
ERIC Educational Resources Information Center
McQuarrie, Fiona A. E.; Kondra, Alex Z.; Lamertz, Kai
2013-01-01
Governments regulate and control organizations, yet their role in determining organizational legitimacy is largely unexamined. In the changing Canadian post-secondary landscape, legitimacy is an increasingly important issue for post-secondary institutions as they compete amongst themselves for access to ever-shrinking resources. Using an…
Show Me! Enhanced Feedback through Screencasting Technology
ERIC Educational Resources Information Center
Seror, Jeremie
2012-01-01
Technology is an ever-increasing part of how teachers and learners work on language and texts. Indeed, computers, the Internet, and Web 2.0 applications are revolutionizing how texts are consumed, discussed, and produced in classrooms. This article focuses on a specific technological innovation emerging from this digital revolution: the use of…
Long-Term Technology Planning: Laying the Foundation To Improve Illinois Schools.
ERIC Educational Resources Information Center
Barker, Bruce O.; Hall, Robert F.
This report provides guidelines for establishing a long-term technology plan for education, applicable to schools in all states. Advanced and emerging telecommunications and computer technologies have resulted in an ever increasing need for teachers and students to develop information processing and lifelong learning skills for gathering and…
New applications of x-ray tomography in pyrolysis of biomass: biochar imaging
USDA-ARS?s Scientific Manuscript database
We report on the first ever use of non-destructive micrometer-scale synchrotron computed microtomography for characterization of biochar materials as a function of pyrolysis temperature. Using this innovative approach we have observed an increase in macropore fraction of the sample, resulting in 29...
Sun-Burned: Space Weather's Impact on United States National Security
NASA Astrophysics Data System (ADS)
Stebbins, B.
2014-12-01
The heightened media attention surrounding the 2013-14 solar maximum presented an excellent opportunity to examine the ever-increasing vulnerability of US national security and its Department of Defense to space weather. This vulnerability exists for three principal reasons: 1) a massive US space-based infrastructure; 2) an almost exclusive reliance on an aging and stressed continental US power grid; and 3) a direct dependence upon a US economy adapted to the conveniences of space and uninterrupted power. I tailored my research and work for the national security policy maker and military strategists in an endeavor to initiate and inform a substantive dialogue on America's preparation for, and response to, a major solar event that would severely degrade core national security capabilities, such as military operations. Significant risk to the Department of Defense exists from powerful events that could impact its space-based infrastructure and even the terrestrial power grid. Given this ever-present and increasing risk to the United States, my work advocates raising the issue of space weather and its impacts to the level of a national security threat. With the current solar cycle having already peaked and the next projected solar maximum just a decade away, the government has a relatively small window to make policy decisions that prepare the nation and its Defense Department to mitigate impacts from these potentially catastrophic phenomena.
Mobile Virtual Reality : A Solution for Big Data Visualization
NASA Astrophysics Data System (ADS)
Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.
2015-12-01
Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks, which are in continual need of improvement and new ideas. Virtual reality is a visualization medium with large audiences, originally developed for gaming. Virtual reality can also be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on a larger market, and the Oculus Rift is the first of this kind of mobile device. The Unity engine makes it possible to convert data files into a mesh of isosurfaces and render them in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed with an Xbox controller. With the introduction of products like the Oculus Rift and HoloLens, combined with ever increasing mobile computing strength, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As more new products like the Surface Pro 4 and other high-power yet very mobile computers are introduced to the market, the RAM and graphics card capacity necessary to run these models becomes more available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and a 2 GHz CPU, which many mobile computers are starting to exceed. Using the Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method of analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones and jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and analysis of a stone can be done remotely without ever seeing the real thing. This strategy can be a game-changer for shoppers, who would never have to go to the store.
Boxes of Model Building and Visualization.
Turk, Dušan
2017-01-01
Macromolecular crystallography and electron microscopy (single-particle and in situ tomography) are merging into a single approach used by the two coalescing scientific communities. The merger is a consequence of technical developments that enabled determination of atomic structures of macromolecules by electron microscopy. Technological progress in experimental methods of macromolecular structure determination, computer hardware, and software changed and continues to change the nature of model building and visualization of molecular structures. However, the increase in automation and availability of structure validation are reducing interactive manual model building to fiddling with details. On the other hand, interactive modeling tools increasingly rely on search and complex energy calculation procedures, which make manually driven changes in geometry increasingly powerful and at the same time less demanding. Thus, the need for accurate manual positioning of a model is decreasing. The user's push only needs to be sufficient to bring the model within the increasing convergence radius of the computing tools. It seems that we can now better than ever determine an average single structure. The tools work better, requirements for engagement of human brain are lowered, and the frontier of intellectual and scientific challenges has moved on. The quest for resolution of new challenges requires out-of-the-box thinking. A few issues such as model bias and correctness of structure, ongoing developments in parameters defining geometric restraints, limitations of the ideal average single structure, and limitations of Bragg spot data are discussed here, together with the challenges that lie ahead.
Binary Millisecond Pulsar Discovery via Gamma-Ray Pulsations
Pletsch, H. J.; Guillemot, L.; Fehrmann, H.; ...
2012-12-07
Millisecond pulsars, old neutron stars spun up by accreting matter from a companion star, can reach high rotation rates of hundreds of revolutions per second. Until now, all such “recycled” rotation-powered pulsars have been detected by their spin-modulated radio emission. In a computing-intensive blind search of gamma-ray data from the Fermi Large Area Telescope (with partial constraints from optical data), we detected a 2.5-millisecond pulsar, PSR J1311-3430. This unambiguously explains a formerly unidentified gamma-ray source that had been a decade-long enigma, confirming previous conjectures. The pulsar is in a circular orbit with an orbital period of only 93 minutes, the shortest of any spin-powered pulsar binary ever found.
Computer hardware for radiologists: Part 2.
Indrajit, Ik; Alam, A
2010-11-01
Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.
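As a worked example of the capacity factors listed above, the snippet below simply multiplies a drive geometry together; the head, track, and sector counts are made up for illustration, and 512 bytes per sector is assumed.

```python
def hard_drive_capacity(sides, tracks_per_side, sectors_per_track, bytes_per_sector=512):
    """Capacity = disk sides x tracks per side x sectors per track x bytes per sector."""
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

# Hypothetical geometry: 16 sides, 65,000 tracks per side, 600 sectors per track
capacity = hard_drive_capacity(16, 65_000, 600)
print(f"{capacity / 10**9:.1f} GB")   # about 319.5 GB
```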
ERIC Educational Resources Information Center
Károly, Adrienn
2015-01-01
With an increasing emphasis on measuring the outcomes of learning in higher education, assessment is gaining an ever more prominent role in curriculum design and development as well as in instructional practices. In formative assessment, feedback is regarded as a powerful pedagogical tool driving student engagement and deep learning. The efficacy…
ADHD in Young Boys: A Correlational Study among Early Childhood Educators in Louisiana
ERIC Educational Resources Information Center
Stubbs, Jessica Hart
2012-01-01
Attention Deficit Hyperactivity Disorder is a psychiatric condition that has been increasingly diagnosed in young American children, with boys being diagnosed three times more frequently than their female peers. As a result, more children than ever are being treated with powerful stimulant medications which can have various desired and undesired…
What Learners "Know" through Digital Media Production: Learning by Design
ERIC Educational Resources Information Center
Mills, Kathy A.
2010-01-01
The power to influence others in ever expanding social networks in the new knowledge economy is tied to capabilities with digital media production that require increased technological knowledge. This article draws on research in primary classrooms to examine the repertoires of cross-disciplinary knowledge that literacy learners need to produce…
ERIC Educational Resources Information Center
Mohan, Marguerite A.; May, Nicole; Assaf-Anid, Nada M.; Castaldi, Marco J.
2006-01-01
The ever-increasing global demand for energy has sparked renewed interest within the engineering community in the study of sustainable alternative energy sources. This paper discusses a power generation system which uses biomass as "fuel" to illustrate the concepts taught to students taking a graduate level chemical engineering process…
A Local Vision on Soil Hydrology (John Dalton Medal Lecture)
NASA Astrophysics Data System (ADS)
Roth, K.
2012-04-01
After shortly looking back to some research trails of the past decades, and touching on the role of soils in our environmental machinery, a vision on the future of soil hydrology is offered. It is local in the sense of being based on limited experience as well as in the sense of focussing on local spatial scales, from 1 m to 1 km. Cornerstones of this vision are (i) rapid developments of quantitative observation technology, illustrated with the example of ground-penetrating radar (GPR), and (ii) the availability of ever more powerful compute facilities which allow to simulate increasingly complicated model representations in unprecedented detail. Together, they open a powerful and flexible approach to the quantitative understanding of soil hydrology where two lines are fitted: (i) potentially diverse measurements of the system of interest and their analysis and (ii) a comprehensive model representation, including architecture, material properties, forcings, and potentially unknown aspects, together with the same analysis as for (i). This approach pushes traditional inversion to operate on analyses, not on the underlying state variables, and to become flexible with respect to architecture and unknown aspects. The approach will be demonstrated for simple situations at test sites.
NASA Astrophysics Data System (ADS)
Unke, Oliver T.; Meuwly, Markus
2018-06-01
Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.
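A minimal sketch of the energy decomposition described above follows: each atom gets a descriptor computed from its local environment, a small network maps that descriptor to an atomic energy, and the total energy is the sum over atoms, so the cost grows linearly with the number of atoms. The histogram descriptor, network size, and random (untrained) weights are stand-in assumptions, not the descriptor or architecture used in the paper.

```python
import numpy as np

def local_descriptor(positions, i, cutoff=5.0):
    """Toy per-atom descriptor: histogram of neighbour distances within a cutoff."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[(d > 0) & (d < cutoff)]
    hist, _ = np.histogram(d, bins=8, range=(0.0, cutoff))
    return hist.astype(float)

def atomic_energy(descriptor, weights):
    """One-hidden-layer network mapping a local descriptor to an atomic energy."""
    w1, b1, w2, b2 = weights
    h = np.tanh(descriptor @ w1 + b1)
    return float(h @ w2 + b2)

def total_energy(positions, weights):
    """Total energy as a sum of local atomic contributions (linear in atom count)."""
    return sum(atomic_energy(local_descriptor(positions, i), weights)
               for i in range(len(positions)))

# Random, untrained weights just to exercise the code path
rng = np.random.default_rng(0)
weights = (rng.normal(size=(8, 16)), rng.normal(size=16),
           rng.normal(size=16), 0.0)
positions = rng.normal(size=(20, 3)) * 3.0   # 20 atoms, arbitrary coordinates
print(total_energy(positions, weights))
```

Because each atomic term depends only on the local environment, the per-atom evaluations are independent and could be distributed across processes, which is the parallelizability the abstract points to.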
Correctional Education: Methods and Practices in the Computer Age.
ERIC Educational Resources Information Center
Dobbs, Ralph
It is suggested that correctional educational programs for adults must be designed in such a manner as to rehabilitate the many who are presently incarcerated and prevent many potential perpetrators from ever engaging in crime. The continually increasing problem of overcrowding in prisons throughout the country has made the need for relevant and…
Cybersecurity Education: Bridging the Gap between Hardware and Software Domains
ERIC Educational Resources Information Center
Lukowiak, Marcin; Radziszowski, Stanislaw; Vallino, James; Wood, Christopher
2014-01-01
With the continuous growth of cyberinfrastructure throughout modern society, the need for secure computing and communication is more important than ever before. As a result, there is also an increasing need for entry-level developers who are capable of designing and building practical solutions for systems with stringent security requirements.…
WLANs for the 21st Century Library
ERIC Educational Resources Information Center
Calamari, Cal
2009-01-01
As educational and research needs have changed, libraries have changed as well. They must meet ever-increasing demand for access to online media, subscriptions to archives, video, audio, and other content. The way a user/patron accesses this information has also changed. Gone are the days of a few hardwired desktops or computer carts. While…
Negative Stress Margins - Are They Real?
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Lee, Darlene S.; Mohaghegh, Michael
2011-01-01
Advances in modeling and simulation, new finite element software, modeling engines and powerful computers are providing opportunities to interrogate designs in a very different manner and in a more detailed approach than ever before. Margins of safety for various design concepts and design parameters are also often evaluated quickly using local stresses once analysis models are defined and developed. This paper suggests that not all the negative margins of safety evaluated are real. Negative margins are frequently encountered near stress concentrations, point loads, and load discontinuities; near locations of stress singularities; in areas having large gradients but insufficient mesh density; in areas with modeling issues and modeling errors; and in areas involving connections and interfaces, two-dimensional (2D) to three-dimensional (3D) transitions, bolts and bolt modeling, and boundary conditions. Now, more than ever, structural analysts need to examine and interrogate their analysis results and perform basic sanity checks to determine whether these negative margins are real.
'Stranger' child-murder: issues relating to causes and controls.
Wilson, P R
1988-02-01
Most industrialised countries are concerned with a perceived increase in the killing of children and adolescents by strangers. Though reliable statistics are lacking, the growth of serial murder suggests that more young persons may be at risk than ever before. Explanations, whether psychological or sociological, of child murder by strangers are inadequately developed. Despite the tendency to see such killers as psychiatrically ill, a number of studies suggest that the majority of offenders do not differ significantly, at least in psychological traits, from non-offenders. Subcultural and other sociological perspectives stressing "social disadvantage" have low explanatory power and do not assist greatly in understanding child killings. Despite sketchy and contradictory evidence on the effects of the media on sexual and violent crime, case study material supports the view that pornography, including popular music, may increase the propensity of individuals to commit atrocities. Counter-measures to control stranger child killing lie in more sophisticated law enforcement (profiling and computer links between police forces), long periods of incarceration of offenders, and more sophisticated analyses of the crimes.
Implementation of ADI: Schemes on MIMD parallel computers
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1993-01-01
In order to simulate the effects of the impingement of hot exhaust jets of high-performance aircraft on landing surfaces, a multi-disciplinary computation coupling flow dynamics to heat conduction in the runway needs to be carried out. Such simulations, which are essentially unsteady, require very large computational power in order to be completed within a reasonable time frame, of the order of an hour. Such power can be furnished by the latest generation of massively parallel computers. These remove the bottleneck of ever more congested data paths to one or a few highly specialized central processing units (CPUs) by having many off-the-shelf CPUs work independently on their own data and exchange information only when needed. During the past year the first phase of this project was completed, in which the optimal strategy for mapping an ADI algorithm for the three-dimensional unsteady heat equation to a MIMD parallel computer was identified. This was done by implementing and comparing three different domain decomposition techniques that define the tasks for the CPUs in the parallel machine. These implementations were done for a Cartesian grid and Dirichlet boundary conditions. The most promising technique was then used to implement the heat equation solver on a general curvilinear grid with a suite of nontrivial boundary conditions. Finally, this technique was also used to implement the Scalar Penta-diagonal (SP) benchmark, which was taken from the NAS Parallel Benchmarks report. All implementations were done in the programming language C on the Intel iPSC/860 computer.
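As a point of reference for the algorithm described above, here is a minimal serial sketch of one ADI time step for the 2D heat equation with Dirichlet boundaries, showing the two directional implicit sweeps (each a set of tridiagonal solves) that any domain decomposition has to accommodate. It is an illustration only, not the project's 3D C implementation for the iPSC/860; grid size, time step, and diffusivity are assumed values.

```python
# Serial sketch of one ADI (Peaceman-Rachford) step for u_t = alpha*(u_xx + u_yy)
# on a square grid with fixed Dirichlet boundaries.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, and super-diagonals a, b, c."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, alpha, dt, h):
    r = alpha * dt / (2 * h * h)
    n = u.shape[0]
    m = n - 2
    half = u.copy()
    # Sweep 1: implicit in x, explicit in y (interior points only).
    for j in range(1, n - 1):
        d = u[1:-1, j] + r * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
        d[0] += r * u[0, j]; d[-1] += r * u[-1, j]          # boundary contributions
        half[1:-1, j] = thomas(np.full(m, -r), np.full(m, 1 + 2 * r), np.full(m, -r), d)
    new = half.copy()
    # Sweep 2: implicit in y, explicit in x.
    for i in range(1, n - 1):
        d = half[i, 1:-1] + r * (half[i + 1, 1:-1] - 2 * half[i, 1:-1] + half[i - 1, 1:-1])
        d[0] += r * half[i, 0]; d[-1] += r * half[i, -1]
        new[i, 1:-1] = thomas(np.full(m, -r), np.full(m, 1 + 2 * r), np.full(m, -r), d)
    return new

u = np.zeros((33, 33)); u[0, :] = 1.0     # hot top boundary, cold elsewhere
for _ in range(100):
    u = adi_step(u, alpha=1.0, dt=1e-3, h=1.0 / 32)
print(u[16, 16])                          # temperature at the grid centre
```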
Cost Optimization Model for Business Applications in Virtualized Grid Environments
NASA Astrophysics Data System (ADS)
Strebel, Jörg
The advent of Grid computing gives enterprises an ever increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed integer optimization model which can be used to minimize the IT expenditures of an enterprise and help in decision-making when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
Using the Tower of Hanoi puzzle to infuse your mathematics classroom with computer science concepts
NASA Astrophysics Data System (ADS)
Marzocchi, Alison S.
2016-07-01
This article suggests that logic puzzles, such as the well-known Tower of Hanoi puzzle, can be used to introduce computer science concepts to mathematics students of all ages. Mathematics teachers introduce their students to computer science concepts that are enacted spontaneously and subconsciously throughout the solution to the Tower of Hanoi puzzle. These concepts include, but are not limited to, conditionals, iteration, and recursion. Lessons, such as the one proposed in this article, are easily implementable in mathematics classrooms and extracurricular programmes as they are good candidates for 'drop in' lessons that do not need to fit into any particular place in the typical curriculum sequence. As an example for readers, the author describes how she used the puzzle in her own Number Sense and Logic course during the federally funded Upward Bound Math/Science summer programme for college-intending low-income high school students. The article explains each computer science term with real-life and mathematical examples, applies each term to the Tower of Hanoi puzzle solution, and describes how students connected the terms to their own solutions of the puzzle. It is timely and important to expose mathematics students to computer science concepts. Given the rate at which technology is currently advancing, and our increased dependence on technology in our daily lives, it has become more important than ever for children to be exposed to computer science. Yet, despite the importance of exposing today's children to computer science, many children are not given adequate opportunity to learn computer science in schools. In the United States, for example, most students finish high school without ever taking a computing course. Mathematics lessons, such as the one described in this article, can help to make computer science more accessible to students who may have otherwise had little opportunity to be introduced to these increasingly important concepts.
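As an illustration of the concepts named above (conditionals, iteration, recursion), here is a short sketch of the Tower of Hanoi solution; it is not taken from the author's lesson materials, and the pole labels are arbitrary.

```python
# Illustrative sketch: the Tower of Hanoi solution as a recursive function,
# plus an iterative move count, showing the conditionals, iteration, and
# recursion the article refers to.
def hanoi(n, source="A", spare="B", target="C", moves=None):
    """Append the optimal move sequence for n disks to `moves` and return it."""
    if moves is None:
        moves = []
    if n == 1:                      # conditional: base case
        moves.append((source, target))
    else:                           # recursion: solve two smaller puzzles
        hanoi(n - 1, source, target, spare, moves)
        moves.append((source, target))
        hanoi(n - 1, spare, source, target, moves)
    return moves

# Iteration: the minimal number of moves doubles (plus one) with each added disk.
count = 0
for _ in range(5):
    count = 2 * count + 1

print(len(hanoi(5)), count)         # both print 31 = 2**5 - 1
```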
Image analysis in cytology: DNA-histogramming versus cervical smear prescreening.
Bengtsson, E W; Nordin, B
1993-01-01
The visual inspection of cellular specimens and histological sections through a light microscope plays an important role in clinical medicine and biomedical research. The human visual system is very good at the recognition of various patterns but less efficient at quantitative assessment of these patterns. Some samples are prepared in great numbers, most notably the screening for cervical cancer, the so-called PAP-smears, which results in hundreds of millions of samples each year, creating a tedious mass inspection task. Numerous attempts have been made over the last 40 years to create systems that solve these two tasks, the quantitative supplement to the human visual system and the automation of mass screening. The most difficult task, the total automation, has received the greatest attention with many large scale projects over the decades. In spite of all these efforts, still no generally accepted automated prescreening device exists on the market. The main reason for this failure is the great pattern recognition capabilities needed to distinguish between cancer cells and all other kinds of objects found in the specimens: cellular clusters, debris, degenerate cells, etc. Improved algorithms, the ever-increasing processing power of computers and progress in biochemical specimen preparation techniques make it likely that eventually useful automated prescreening systems will become available. Meanwhile, much less effort has been put into the development of interactive cell image analysis systems. Still, some such systems have been developed and put into use at thousands of laboratories worldwide. In these the human pattern recognition capability is used to select the fields and objects that are to be analysed while the computational power of the computer is used for the quantitative analysis of cellular DNA content or other relevant markers. Numerous studies have shown that the quantitative information about the distribution of cellular DNA content is of prognostic significance in many types of cancer. Several laboratories are therefore putting these techniques into routine clinical use. The more advanced systems can also study many other markers and cellular features, some known to be of clinical interest, others useful in research. The advances in computer technology are making these systems more generally available through decreasing cost, increasing computational power and improved user interfaces. We have been involved in research and development of both automated and interactive cell analysis systems during the last 20 years. Here some experiences and conclusions from this work will be presented as well as some predictions about what can be expected in the near future.
Advanced development of a programmable power processor
NASA Technical Reports Server (NTRS)
Lukens, F. E.; Lanier, J. R., Jr.; Kapustka, R. E.; Graves, J.
1980-01-01
The need for the development of a multipurpose, flexible programmable power processor (PPP) has increased significantly in recent years as a way to reduce ever-rising development costs. One of the program requirements the PPP specification will cover is the power conversion needs of the 25 kW power module. The 25 kW power module could support the Space Shuttle program during the 1980s and 1990s and could be the stepping stone to future large space programs. Trade studies that led to the selection of a microprocessor-controlled power processor are briefly discussed. Emphasis is given to power processing equipment that uses a microprocessor to provide the versatility for multiple uses and to allow future growth by reprogramming the output voltage to a higher level (from 30 V to 120 V). Component selection and design considerations are also discussed.
Impact evaluation of conducted UWB transients on loads in power-line networks
NASA Astrophysics Data System (ADS)
Li, Bing; Månsson, Daniel
2017-09-01
Nowadays, given the ever-increasing dependence on diverse electronic devices and systems, the proliferation of potential electromagnetic interference (EMI) has become a critical threat to reliable operation. A typical issue is whether electronics connected to power-line networks can operate reliably when exposed to a harsh electromagnetic environment. In this paper, we consider a conducted ultra-wideband (UWB) disturbance, as an example of an intentional electromagnetic interference (IEMI) source, and evaluate its impact on the loads in a network. With the aid of the fast Fourier transform (FFT), the UWB transient is characterized in the frequency domain. Based on a modified Baum-Liu-Tesche (BLT) method, the EMI received at the loads, which have complex impedances, is computed. Through the inverse FFT (IFFT), we obtain time-domain responses of the loads. To evaluate the impact on the loads, we employ five common but important quantifiers: the time-domain peak, total signal energy, peak signal power, peak time rate of change, and peak time integral of the pulse. Moreover, to perform a comprehensive analysis, we also investigate the effects of the attributes (capacitive, resistive, or inductive) of other loads connected to the network, the rise time and pulse width of the UWB transient, and the lengths of the power lines. It is seen that, for loads distributed in a network, the impact evaluation of IEMI should be based on both the characteristics of the IEMI source and the network features, such as load impedances, layout, and cable characteristics.
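The five quantifiers listed above are straightforward to compute once a load's time-domain response is available. The sketch below evaluates them on a synthetic double-exponential transient; the pulse parameters and the 50-ohm load are assumptions for illustration, and the BLT network computation itself is not reproduced.

```python
# Minimal sketch of the five impact quantifiers, evaluated on a synthetic
# double-exponential UWB-like transient at an assumed 50-ohm load.
import numpy as np

dt = 1e-11                                   # 10 ps sampling step
t = np.arange(0, 50e-9, dt)
v = 1e3 * (np.exp(-t / 2e-9) - np.exp(-t / 0.2e-9))   # volts: fast rise, slower decay
R = 50.0                                     # assumed load resistance (ohms)

p = v**2 / R                                 # instantaneous power at the load
quantifiers = {
    "time-domain peak (V)":           np.max(np.abs(v)),
    "total signal energy (J)":        np.sum(p) * dt,              # rectangle-rule integral
    "peak signal power (W)":          np.max(p),
    "peak time rate of change (V/s)": np.max(np.abs(np.gradient(v, dt))),
    "peak time integral (V*s)":       np.max(np.abs(np.cumsum(v) * dt)),
}
for name, value in quantifiers.items():
    print(f"{name:32s} {value:.3e}")
```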
Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service
NASA Astrophysics Data System (ADS)
Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.;
2017-10-01
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
An, Gary; Christley, Scott
2012-01-01
Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggest that this modeling framework is well suited for addressing the translational dilemma. This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
NASA Astrophysics Data System (ADS)
Stacey, Weston M.
2001-02-01
An authoritative textbook and up-to-date professional's guide to basic and advanced principles and practices. Nuclear reactors now account for a significant portion of the electrical power generated worldwide. At the same time, the past few decades have seen an ever-increasing number of industrial, medical, military, and research applications for nuclear reactors. Nuclear reactor physics is the core discipline of nuclear engineering, and as the first comprehensive textbook and reference on basic and advanced nuclear reactor physics to appear in a quarter century, this book fills a large gap in the professional literature. Nuclear Reactor Physics is a textbook for students new to the subject, for others who need a basic understanding of how nuclear reactors work, as well as for those who are, or wish to become, specialists in nuclear reactor physics and reactor physics computations. It is also a valuable resource for engineers responsible for the operation of nuclear reactors. Dr. Weston Stacey begins with clear presentations of the basic physical principles, nuclear data, and computational methodology needed to understand both the static and dynamic behaviors of nuclear reactors. This is followed by in-depth discussions of advanced concepts, including extensive treatment of neutron transport computational methods. As an aid to comprehension and quick mastery of computational skills, he provides numerous examples illustrating step-by-step procedures for performing the calculations described and chapter-end problems. Nuclear Reactor Physics is a useful textbook and working reference. It is an excellent self-teaching guide for research scientists, engineers, and technicians involved in industrial, research, and military applications of nuclear reactors, as well as government regulators who wish to increase their understanding of nuclear reactors.
Munabi, Ian G; Buwembo, William; Bajunirwe, Francis; Kitara, David Lagoro; Joseph, Ruberwa; Peter, Kawungezi; Obua, Celestino; Quinn, John; Mwaka, Erisa S
2015-02-25
Effective utilization of computers and their applications in medical education and research is of paramount importance to students. The objective of this study was to determine the association between owning a computer and use of computers for research data analysis and the other factors influencing health professions students' computer use for data analysis. We conducted a cross sectional study among undergraduate health professions students at three public universities in Uganda using a self-administered questionnaire. The questionnaire was composed of questions on participant demographics, students' participation in research, computer ownership, and use of computers for data analysis. Descriptive and inferential statistics (uni-variable and multi-level logistic regression analysis) were used to analyse data. The level of significance was set at 0.05. Six hundred (600) of 668 questionnaires were completed and returned (response rate 89.8%). A majority of respondents were male (68.8%) and 75.3% reported owning computers. Overall, 63.7% of respondents reported that they had ever done computer-based data analysis. The following factors were significant predictors of having ever done computer-based data analysis: ownership of a computer (adj. OR 1.80, p = 0.02), recently completed course in statistics (adj. OR 1.48, p = 0.04), and participation in research (adj. OR 2.64, p < 0.01). Owning a computer, participation in research and undertaking courses in research methods influence undergraduate students' use of computers for research data analysis. Students are increasingly participating in research, and thus need to have competencies for the successful conduct of research. Medical training institutions should encourage both curricular and extra-curricular efforts to enhance research capacity in line with the modern theories of adult learning.
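For readers unfamiliar with how adjusted odds ratios such as those above are obtained, the following hedged sketch fits a logistic regression on synthetic stand-in data and exponentiates the coefficients; it is not the study's dataset or analysis code.

```python
# Hedged illustration only: adjusted odds ratios as exponentiated logistic
# regression coefficients, using synthetic stand-in data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 600
owns_computer = rng.integers(0, 2, n)
stats_course  = rng.integers(0, 2, n)
did_research  = rng.integers(0, 2, n)

# Simulate the outcome "ever did computer-based data analysis".
logit = -0.5 + 0.6 * owns_computer + 0.4 * stats_course + 1.0 * did_research
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([owns_computer, stats_course, did_research]))
fit = sm.Logit(y, X).fit(disp=0)

# Adjusted odds ratios are the exponentiated coefficients (constant excluded).
names = ["computer ownership", "statistics course", "research participation"]
for name, coef in zip(names, fit.params[1:]):
    print(f"adj. OR for {name}: {np.exp(coef):.2f}")
```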
Equalisation or Inflation? Social Class and Gender Differentials in England and Wales
ERIC Educational Resources Information Center
Sullivan, Alice; Heath, Anthony; Rothon, Catherine
2011-01-01
The Labour government elected in 1997, which lost power in 2010, was the longest serving Labour administration Britain has ever had. This period saw an enormous expansion of further and higher education, and an increase in the proportion of students achieving school-level qualifications. But have inequalities diminished as a result? We examine the…
Older Computer-Literate Women: Their Motivations, Obstacles, and Paths to Success
ERIC Educational Resources Information Center
Rosenthal, Rita L.
2008-01-01
With the ever-increasing impact of computerized communication and information delivery, the need to encourage learning about technology is critical for the older population today as well as for soon-to-be retirees. Adler (1996, 2002, 2003) has described and defined the key benefits to seniors: enhanced communication with family and friends,…
ERIC Educational Resources Information Center
Orsini, Gabriele
2015-01-01
The ever-increasing impact of molecular quantum calculations over chemical sciences implies a strong and urgent need for the elaboration of proper teaching strategies in university curricula. In such perspective, this paper proposes an extensive project for a student-driven, cooperative, from-scratch implementation of a general Hartree-Fock…
Development of an Augmented Reality Game to Teach Abstract Concepts in Food Chemistry
ERIC Educational Resources Information Center
Crandall, Philip G.; Engler, Robert K.; Beck, Dennis E.; Killian, Susan A.; O'Bryan, Corliss A.; Jarvis, Nathan; Clausen, Ed
2015-01-01
One of the most pressing issues for many land grant institutions is the ever increasing cost to build and operate wet chemistry laboratories. A partial solution is to develop computer-based teaching modules that take advantage of animation, web-based or off-campus learning experiences directed at engaging students' creative experiences. We…
ERIC Educational Resources Information Center
Ocker, Rosalie J.; Yaverbaum, Gayle J.
2004-01-01
Although collaborative learning techniques have been shown to enhance the learning experience, it is difficult to incorporate these concepts into courses without requiring students to collaborate outside of class. There is an ever increasing number of nontraditional university students who find it difficult to schedule the necessary meetings with…
The Evolving Roles of Language Teachers: Trained Coders, Local Researchers, Global Citizens
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2015-01-01
Language teachers are working in a world which has changed in the past decades in fundamentally disruptive ways, through profound changes in the role that networked computers play in everyday life and through the social and demographic shifts brought on by an increasingly globalized society, bringing together more than ever before people from…
Computer Networking in Japanese Education Today. AVE in Japan No. 27.
ERIC Educational Resources Information Center
Japan Audio-Visual Education Association, Tokyo.
The Ad Hoc Council on Educational Reform pointed out in 1987 that Japanese education must prepare for the ever-increasing information needs of the future, and in 1988, a proposal for the development of information networks was published by the Ministry of Education, Science, and Culture. This proposal recommended the utilization of a wide range of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, Brian; Jackson, R. Brian
2017-03-08
The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE Project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Lab. Project management was performed by AREVA Federal Services. This first-of-a-kind project resulted in both local point temperature measurements and local flow-mixing experiment data, paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly matched-index-of-refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.
Which risk models perform best in selecting ever-smokers for lung cancer screening?
A new analysis by scientists at NCI evaluates nine different individualized lung cancer risk prediction models based on their selections of ever-smokers for computed tomography (CT) lung cancer screening.
Emergent properties of nuclei from ab initio coupled-cluster calculations
NASA Astrophysics Data System (ADS)
Hagen, G.; Hjorth-Jensen, M.; Jansen, G. R.; Papenbrock, T.
2016-06-01
Emergent properties such as nuclear saturation and deformation, and the effects on shell structure due to the proximity of the scattering continuum and particle decay channels are fascinating phenomena in atomic nuclei. In recent years, ab initio approaches to nuclei have taken the first steps towards tackling the computational challenge of describing these phenomena from Hamiltonians with microscopic degrees of freedom. This endeavor is now possible due to ideas from effective field theories, novel optimization strategies for nuclear interactions, ab initio methods exhibiting a soft scaling with mass number, and ever-increasing computational power. This paper reviews some of the recent accomplishments. We also present new results. The recently optimized chiral interaction NNLO$_{\mathrm{sat}}$ is shown to provide an accurate description of both charge radii and binding energies in selected light- and medium-mass nuclei up to $^{56}$Ni. We derive an efficient scheme for including continuum effects in coupled-cluster computations of nuclei based on chiral nucleon-nucleon and three-nucleon forces, and present new results for unbound states in the neutron-rich isotopes of oxygen and calcium. The coupling to the continuum impacts the energies of the $J^\pi = 1/2^-, 3/2^-, 7/2^-, 3/2^+$ states in $^{17,23,25}$O, and—contrary to naive shell-model expectations—the level ordering of the $J^\pi = 3/2^+, 5/2^+, 9/2^+$ states in $^{53,55,61}$Ca.
Recent Trends in Robotics Research
NASA Astrophysics Data System (ADS)
Ejiri, Masakazu
My views on recent trends in the strategy and practice of Japan's robotics research are briefly introduced. To meet ever-increasing public expectations, robotics researchers and engineers have to be more seriously concerned about robots' intrinsic weaknesses. Examples of these are power-related and reliability issues. Resolving these issues will increase the feasibility of creating successful new industry, and the likelihood of robotics becoming a key technology for providing a safe and stress-free society in the future.
Computer-based analysis of microvascular alterations in a mouse model for Alzheimer's disease
NASA Astrophysics Data System (ADS)
Heinzer, Stefan; Müller, Ralph; Stampanoni, Marco; Abela, Rafael; Meyer, Eric P.; Ulmann-Schuler, Alexandra; Krucker, Thomas
2007-03-01
Vascular factors associated with Alzheimer's disease (AD) have recently gained increased attention. To investigate changes in vascular, particularly microvascular architecture, we developed a hierarchical imaging framework to obtain large-volume, high-resolution 3D images from brains of transgenic mice modeling AD. In this paper, we present imaging and data analysis methods which allow compiling unique characteristics from several hundred gigabytes of image data. Image acquisition is based on desktop micro-computed tomography (µCT) and local synchrotron-radiation µCT (SRµCT) scanning with a nominal voxel size of 16 µm and 1.4 µm, respectively. Two visualization approaches were implemented: stacks of Z-buffer projections for fast data browsing, and progressive-mesh based surface rendering for detailed 3D visualization of the large datasets. In a first step, image data was assessed visually via a Java client connected to a central database. Identified characteristics of interest were subsequently quantified using global morphometry software. To obtain even deeper insight into microvascular alterations, tree analysis software was developed providing local morphometric parameters such as number of vessel segments or vessel tortuosity. In the context of ever increasing image resolution and large datasets, computer-aided analysis has proven both powerful and indispensable. The hierarchical approach maintains the context of local phenomena, while proper visualization and morphometry provide the basis for detailed analysis of the pathology related to structure. Beyond analysis of microvascular changes in AD this framework will have significant impact considering that vascular changes are involved in other neurodegenerative diseases as well as in cancer, cardiovascular disease, asthma, and arthritis.
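As a small example of one of the local morphometric parameters mentioned above (vessel tortuosity), the sketch below computes path length divided by chord length for a traced centerline; the centerline points are hypothetical.

```python
# Sketch of vessel tortuosity: path length of a traced centerline divided by
# the straight-line (chord) distance between its endpoints.
import numpy as np

def tortuosity(centerline):
    """centerline: (N, 3) array of ordered points along one vessel segment."""
    steps = np.diff(centerline, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(centerline[-1] - centerline[0])
    return path_length / chord          # 1.0 for a perfectly straight vessel

t = np.linspace(0, 4 * np.pi, 200)
helix = np.column_stack([10 * np.cos(t), 10 * np.sin(t), 5 * t])   # a winding "vessel"
print(round(tortuosity(helix), 2))
```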
Composite materials molding simulation for purpose of automotive industry
NASA Astrophysics Data System (ADS)
Grabowski, Ł.; Baier, A.; Majzner, M.; Sobek, M.
2016-08-01
Composite materials play an increasingly important role in industry as a whole, and a special role in the ever-evolving automotive industry. Every year composite materials are used in a growing number of components in car construction. Development requires the search for ever new applications of composite materials in areas where previously only metallic materials were used. Requirements for modern solutions, such as reduced vehicle weight, the required strength, and vibration damping characteristics, go hand in hand with the properties of modern composite materials. Designers therefore faced the challenge of using modern composite materials in the construction of the bodies of vehicle power steering systems. The initial choice of method for producing the composite bodies was injection molding of the composite material. Injection molding of polymeric materials has been widely known and used for many years, but injection molding of composite materials is a relatively new and innovative issue; it is not very common and is characterized by different conditions, parameters, and properties compared with the classical method. Therefore, to select an appropriate composite material for injection molding of the power steering system body, computer analyses were carried out in the Siemens NX 10.0 environment, including the Moldex3D and EasyFill Advanced tools, to simulate the injection of materials from a group of candidate solutions. The analyses were carried out on a model of a modernized wheel case of the power steering system. Input parameters such as temperature, injector pressure, and temperature charts were analysed. An important part of the work was the analysis of material propagation inside the mold during injection, which made it possible to assess the formability of the shape and to locate possible shape imperfections and air traps. A very important result of the computer analysis was the determination of material shrinkage, which significantly affects how closely the assumed geometry of the tested component is maintained; it also allowed shrinkage to be anticipated while modelling the shape of the body. The next step was to analyse the numerical results obtained from the Siemens NX 10 and Moldex3D EasyFill Advanced environments. The injection process was applied to the shape of the prototype power steering body, using a material similar to one of the materials expected to be used in the molding process. The results were then analysed with respect to geometry, where the samples showed aberrations in comparison to the given mold shape, and also in terms of shrinkage. The research and results are described in detail in this paper.
So You Want a Meade LX Telescope!
NASA Astrophysics Data System (ADS)
Harris, Lawrence
Perhaps every generation of astronomers believes that their telescopes are the best that have ever been. They are surely all correct! The great leap of our time is that computer-designed and machined parts have led to more accurately made components that give the astronomer ever better views. The manual skills of the craftsman mirror grinder have been transformed into the new-age skills of the programmer and the machine maker. (The new products did not end the work of craftsman telescope makers, though. Many highly skilled amateur/professional opticians continued to produce good-quality mirrors that are still seen today.) Amateur-priced telescopes are now capable of highly accurate tracking and computer control that were once only the province of professionals. This has greatly increased the possibilities of serious astronomy projects for which tailor-made software has been developed. Add a CCD camera to these improved telescopes (see Chap. 3), and you bring a whole new dimension to your astronomy (see Fig. 1.1).
Advances in computer imaging/applications in facial plastic surgery.
Papel, I D; Jiannetto, D F
1999-01-01
Rapidly progressing computer technology, ever-increasing expectations of patients, and a confusing medicolegal environment require a clarification of the role of computer imaging/applications. Advances in computer technology and its applications are reviewed. A brief historical discussion is included for perspective. Improvements in both hardware and software with the advent of digital imaging have allowed great increases in speed and accuracy in patient imaging. This facilitates doctor-patient communication and possibly realistic patient expectations. Patients seeking cosmetic surgery now often expect preoperative imaging. Although society in general has become more litigious, a literature search up to 1998 reveals no lawsuits directly involving computer imaging. It appears that conservative utilization of computer imaging by the facial plastic surgeon may actually reduce liability and promote communication. Recent advances have significantly enhanced the value of computer imaging in the practice of facial plastic surgery. These technological advances in computer imaging appear to constitute a useful technique for the practice of facial plastic surgery. Inclusion of computer imaging should be given serious consideration as an adjunct to clinical practice.
The Baghdad that Was: Using Primary Sources to Teach World History
ERIC Educational Resources Information Center
Schur, Joan Brodsky
2009-01-01
That primary source documents have the power to bring the past alive is no news to social studies teachers. What is new in the last 10 years is the number of digitized documents available online that teachers can download and use in their classrooms. Encouraging teachers to utilize this ever-increasing treasure trove of resources was the goal of…
ERIC Educational Resources Information Center
Mullin, Christopher M.
2016-01-01
Decades of research reinforce the power of postsecondary education to improve the lives of students and society. To this end, the establishment of an educated citizenry built to sustain and mold the principles governing an ever-dynamic America is increasingly a responsibility incumbent upon institutions of higher education. Just how to fund this…
Lost in Second Life: Virtual Embodiment and Language Learning via Multimodal Communication
ERIC Educational Resources Information Center
Pasfield-Neofitou, Sarah; Huang, Hui; Grant, Scott
2015-01-01
Increased recognition of the role of the body and environment in cognition has taken place in recent decades in the form of new theories of embodied and extended cognition. The growing use of ever more sophisticated computer-generated 3D virtual worlds and avatars has added a new dimension to these theories of cognition. Both developments provide…
With the advent of Earth-orbiting satellites to monitor our planet and spacecraft that study the Sun, an active international joint project to monitor the Sun-Earth (Solar Terrestrial) environment has evolved. Coupled with an ever-increasing computational capability, we are now a...
Improving Data Mobility & Management for International Cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borrill, Julian; Dart, Eli; Gore, Brooklin
In February 2015 the third workshop in the CrossConnects series, with a focus on Improving Data Mobility & Management for International Cosmology, was held at Lawrence Berkeley National Laboratory. Scientists from fields including astrophysics, cosmology, and astronomy collaborated with experts in computing and networking to outline strategic opportunities for enhancing scientific productivity and effectively managing the ever-increasing scale of scientific data.
ERIC Educational Resources Information Center
Huelskamp, Lisa M.
2009-01-01
The need for effective teachers is growing while national and state standards are putting ever-increasing demands on teachers and raising expectations for student achievement. Low science and mathematics standardized test scores, particularly in the middle grades, reflect unprepared adolescents, perhaps because of ineffective teaching strategies…
Geographically distributed real-time digital simulations using linear prediction
Liu, Ren; Mohanpurkar, Manish; Panwar, Mayank; ...
2016-07-04
Real-time simulation is a powerful tool for analyzing, planning, and operating modern power systems. For analyzing ever-evolving power systems and understanding complex dynamic and transient interactions, larger real-time computation capabilities are essential. These facilities are interspersed all over the globe, and to leverage such unique facilities, geographically distributed real-time co-simulation for analyzing power systems is pursued and presented. However, the communication latency between different simulator locations may lead to inaccuracy in geographically distributed real-time co-simulations. In this paper, the effect of communication latency on geographically distributed real-time co-simulation is introduced and discussed. In order to reduce the effect of the communication latency, a real-time data predictor based on linear curve fitting is developed and integrated into the distributed real-time co-simulation. Two digital real-time simulators are used to perform dynamic and transient co-simulations with communication latency and the predictor. Results demonstrate the effect of the communication latency and the performance of the real-time data predictor in compensating for it.
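A minimal sketch of the latency-compensation idea described above follows: fit a line to the most recent samples of the exchanged signal and extrapolate it forward by the communication delay. The window length, delay, and test signal are assumptions, not the authors' implementation.

```python
# Sketch of linear-prediction latency compensation: least-squares line fit over
# a sliding window, extrapolated ahead by the communication delay.
import numpy as np

def predict_ahead(times, values, latency, window=10):
    """Fit a line to the last `window` samples and evaluate it at t_last + latency."""
    t, v = np.asarray(times[-window:]), np.asarray(values[-window:])
    slope, intercept = np.polyfit(t, v, 1)
    return slope * (t[-1] + latency) + intercept

dt, latency = 1e-3, 5e-3                  # 1 ms simulation step, 5 ms network delay
lead = round(latency / dt)                # delay expressed in samples
t_grid = np.arange(0, 0.2, dt)
signal = np.sin(2 * np.pi * 10 * t_grid)  # a 10 Hz quantity exchanged between simulators

errors = []
for k in range(20, len(t_grid) - lead):
    predicted = predict_ahead(t_grid[:k], signal[:k], latency)
    actual = signal[k - 1 + lead]         # value that actually arrives after the delay
    errors.append(abs(predicted - actual))
print(f"mean prediction error: {np.mean(errors):.4f}")
```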
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
easyGWAS: A Cloud-Based Platform for Comparing the Results of Genome-Wide Association Studies.
Grimm, Dominik G; Roqueiro, Damian; Salomé, Patrice A; Kleeberger, Stefan; Greshake, Bastian; Zhu, Wangsheng; Liu, Chang; Lippert, Christoph; Stegle, Oliver; Schölkopf, Bernhard; Weigel, Detlef; Borgwardt, Karsten M
2017-01-01
The ever-growing availability of high-quality genotypes for a multitude of species has enabled researchers to explore the underlying genetic architecture of complex phenotypes at an unprecedented level of detail using genome-wide association studies (GWAS). The systematic comparison of results obtained from GWAS of different traits opens up new possibilities, including the analysis of pleiotropic effects. Other advantages that result from the integration of multiple GWAS are the ability to replicate GWAS signals and to increase statistical power to detect such signals through meta-analyses. In order to facilitate the simple comparison of GWAS results, we present easyGWAS, a powerful, species-independent online resource for computing, storing, sharing, annotating, and comparing GWAS. The easyGWAS tool supports multiple species, the uploading of private genotype data and summary statistics of existing GWAS, as well as advanced methods for comparing GWAS results across different experiments and data sets in an interactive and user-friendly interface. easyGWAS is also a public data repository for GWAS data and summary statistics and already includes published data and results from several major GWAS. We demonstrate the potential of easyGWAS with a case study of the model organism Arabidopsis thaliana, using flowering and growth-related traits. © 2016 American Society of Plant Biologists. All rights reserved.
Recipes for free energy calculations in biomolecular systems.
Moradi, Mahmoud; Babin, Volodymyr; Sagui, Celeste; Roland, Christopher
2013-01-01
During the last decade, several methods for sampling phase space and calculating various free energies in biomolecular systems have been devised or refined for molecular dynamics (MD) simulations. Thus, state-of-the-art methodology and ever-increasing computer power allow calculations that were out of reach a decade ago. These calculations, however, are not trivial as they require knowledge of the methods, insight into the system under study, and, quite often, an artful combination of different methodologies in order to avoid the various traps inherent in an unknown free energy landscape. In this chapter, we illustrate some of these concepts with two relatively simple systems, a sugar ring and proline oligopeptides, whose free energy landscapes still offer considerable challenges. In order to explore the configurational space of these systems, and to surmount the various free energy barriers, we combine three complementary methods: a nonequilibrium umbrella sampling method (adaptively biased MD, or ABMD), replica-exchange molecular dynamics (REMD), and steered molecular dynamics (SMD). In particular, ABMD is used to compute the free energy surface of a set of collective variables; REMD is used to improve the performance of ABMD, to carry out sampling in space complementary to the collective variables, and to sample equilibrium configurations directly; and SMD is used to study different transition mechanisms.
Interpreting the strongest deep earthquake ever observed
NASA Astrophysics Data System (ADS)
Schultz, Colin
2013-12-01
Massive earthquakes that strike deep within the Earth may be more efficient at dissipating pent-up energy than similar quakes near the surface, according to new research by Wei et al. The authors analyzed the rupture of the most powerful deep earthquake ever recorded.
NASA Astrophysics Data System (ADS)
Chow, Sherman S. M.
Traceable signature scheme extends a group signature scheme with an enhanced anonymity management mechanism. The group manager can compute a tracing trapdoor which enables anyone to test if a signature is signed by a given misbehaving user, while the only way to do so for group signatures requires revealing the signer of all signatures. Nevertheless, it is not tracing in a strict sense. For all existing schemes, T tracing agents need to recollect all N' signatures ever produced and perform RN' “checks” for R revoked users. This involves a high volume of transfer and computations. Increasing T increases the degree of parallelism for tracing but also the probability of “missing” some signatures in case some of the agents are dishonest.
Inside The Space Launch System (SLS): Outfitting The World’s Most Powerful Rocket
2018-02-13
Find out why NASA’s new deep-space rocket, the Space Launch System (SLS), is more than just big and beautiful. For the world’s most powerful rocket, it takes a lot of “guts.” Engineers have built all the giant structures that will be assembled to form the first SLS rocket, and now they are busy installing and outfitting the rocket’s insides with sensors, cables and other equipment. The rocket’s insides, including its incredible flight computers and batteries, will ensure SLS can do the job of sending the Orion spacecraft out beyond the Moon, farther than any human-rated space vehicle has ever ventured. Learn how the SLS core stage components are being outfitted for the first SLS mission, Exploration Mission-1. Find out more at https://www.nasa.gov/exploration/systems/sls/index.html
NLO renormalization in the Hamiltonian truncation
NASA Astrophysics Data System (ADS)
Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.
2017-09-01
Hamiltonian truncation (also known as "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate ever variant of Hamiltonian Truncation, which implements renormalization at the cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as a result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.
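For intuition about the truncation step itself (not the renormalization program or the 2D field theory of the paper), the following toy quantum-mechanical sketch diagonalizes H = p^2/2 + x^2/2 + g x^4 in a truncated harmonic-oscillator basis; accuracy is limited only by the basis size N, mirroring the statement above that accuracy is limited only by computational resources.

```python
# Toy analogue of Hamiltonian truncation: exact diagonalization of an
# anharmonic oscillator in an N-state harmonic-oscillator basis.
import numpy as np

def ground_state_energy(g, N):
    """Lowest eigenvalue of H = p^2/2 + x^2/2 + g*x^4 in an N-state basis."""
    n = np.arange(N)
    a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator (truncated)
    x = (a + a.T) / np.sqrt(2.0)              # x = (a + a^dagger)/sqrt(2)
    p2 = -((a.T - a) @ (a.T - a)) / 2.0       # p = i (a^dagger - a)/sqrt(2)
    H = 0.5 * p2 + 0.5 * (x @ x) + g * (x @ x @ x @ x)
    return np.linalg.eigvalsh(H)[0]

# The estimate converges as the truncated subspace grows.
for N in (4, 8, 16, 32, 64):
    print(N, round(ground_state_energy(g=1.0, N=N), 6))
```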
Organization of the secure distributed computing based on multi-agent system
NASA Astrophysics Data System (ADS)
Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera
2018-04-01
Nowadays the development of methods for distributed computing receives much attention. One such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can be exposed to security threats originating from the computational processes themselves. The authors have developed a unified agent algorithm for a control system that governs the operation of the computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. The agents deployed on a computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting the computers of a new computer system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing computing environment (a dynamically changing number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could otherwise lead to wrong decisions; in addition, the system checks and corrects wrong results.
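One ingredient of the abstract, detecting falsified results, can be sketched with simple task replication and majority voting, as below; the agents and network layer of the paper are not modelled, and all names are illustrative.

```python
# Sketch: distribute each task to several worker nodes and cross-check the
# returned results by majority vote, so falsified results can be detected.
from collections import Counter
import random

def honest_worker(task):
    return sum(task)                          # the "computation" each node performs

def dishonest_worker(task):
    return sum(task) + random.randint(1, 5)   # occasionally falsifies results

workers = [honest_worker, honest_worker, dishonest_worker]

def run_task(task, replication=3):
    """Send the task to `replication` randomly chosen workers and take the majority answer."""
    results = [random.choice(workers)(task) for _ in range(replication)]
    value, votes = Counter(results).most_common(1)[0]
    suspicious = votes <= replication // 2    # no clear majority: flag for re-run
    return value, suspicious

random.seed(0)
for task in (list(range(i, i + 10)) for i in range(5)):
    value, suspicious = run_task(task)
    print(value, "flagged" if suspicious else "accepted")
```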
1998-06-05
and obtaining control of the seas in and around the Italian Peninsula. Hamilcar took defeat bitterly and taught Hannibal, at an early age, to...a second Carthaginian empire in the rich, fertile Spanish climate which could challenge Rome’s ever-increasing power. As Hamilcar moved...consolidate and strengthen Carthaginian strongholds in Spain, then took the offensive battle to the Romans on their own ground—the Italian Peninsula
The Arctic Circle: A Ring of Influence
2010-05-03
International awareness regarding the Arctic Circle continues to grow due to increasing polar ice melt, and the need... ice melt has created opportunities for Arctic countries to expand their territorial areas for access to more natural resources. Those resources... "bringing fish up further north than ever seen before," states then-Navy Commander Ray Chartier, National Ice Center Director, in his Sea Power interview
The experience of agency in human-computer interactions: a review
Limerick, Hannah; Coyle, David; Moore, James W.
2014-01-01
The sense of agency is the experience of controlling both one’s body and the external environment. Although the sense of agency has been studied extensively, there is a paucity of studies in applied “real-life” situations. One applied domain that seems highly relevant is human-computer-interaction (HCI), as an increasing number of our everyday agentive interactions involve technology. Indeed, HCI has long recognized the feeling of control as a key factor in how people experience interactions with technology. The aim of this review is to summarize and examine the possible links between sense of agency and understanding control in HCI. We explore the overlap between HCI and sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers. An overarching consideration is how agency research can inform HCI and vice versa. Finally, we discuss the potential ethical implications of personal responsibility in an ever-increasing society of technology users and intelligent machine interfaces. PMID:25191256
Collaborative interactive visualization: exploratory concept
NASA Astrophysics Data System (ADS)
Mokhtari, Marielle; Lavigne, Valérie; Drolet, Frédéric
2015-05-01
Dealing with an ever-increasing amount of data is a challenge that military intelligence analysts, or teams of analysts, face day to day. Increased individual and collective comprehension comes through collaboration between people: the better the collaboration, the better the comprehension. Nowadays, various technologies support and enhance collaboration by allowing people to connect and collaborate in settings as varied as mobile devices, networked computers, display walls, and tabletop surfaces, to name just a few. A powerful collaboration system includes traditional and multimodal visualization features to achieve effective human communication. Interactive visualization strengthens collaboration because this approach is conducive to incrementally building a mental assessment of the data's meaning. The purpose of this paper is to present an overview of the envisioned collaboration architecture and the interactive visualization concepts underlying the Sensemaking Support System prototype developed to support analysts in the context of the Joint Intelligence Collection and Analysis Capability project at DRDC Valcartier. It presents the current version of the architecture, discusses future capabilities to help analysts in the accomplishment of their tasks, and finally recommends collaboration and visualization technologies that allow going a step further, both as individuals and as a team.
Increasingly mobile: How new technologies can enhance qualitative research
Moylan, Carrie Ann; Derr, Amelia Seraphia; Lindhorst, Taryn
2015-01-01
Advances in technology, such as the growth of smart phones, tablet computing, and improved access to the internet have resulted in many new tools and applications designed to increase efficiency and improve workflow. Some of these tools will assist scholars using qualitative methods with their research processes. We describe emerging technologies for use in data collection, analysis, and dissemination that each offer enhancements to existing research processes. Suggestions for keeping pace with the ever-evolving technological landscape are also offered. PMID:25798072
Closed-Cycle Hydrogen-Oxygen Regenerative Fuel Cell at the NASA Glenn Research Center-An Update
NASA Technical Reports Server (NTRS)
Bents, David J.; Chang, Bei-Jiann; Johnson, Donald W.; Garcia, Christopher P.
2008-01-01
The closed cycle hydrogen-oxygen proton exchange membrane (PEM) regenerative fuel cell (RFC) at the NASA Glenn Research Center has demonstrated multiple back-to-back contiguous cycles at rated power and round-trip efficiencies up to 52 percent. It is the first fully closed cycle RFC ever demonstrated. (The entire system is sealed; nothing enters or escapes the system other than electrical power and heat.) During fiscal years (FY) 2006 to 2007, the system's numerous modifications and internal improvements focused on reducing parasitic power, heat loss, and noise signature; increasing its functionality as an unattended automated energy storage device; and improving in-service reliability.
Space-based solar power conversion and delivery systems study
NASA Technical Reports Server (NTRS)
1976-01-01
Even at reduced rates of growth, the demand for electric power is expected to more than triple between now and 1995, and to triple again over the period 1995-2020. Without the development of new power sources and advanced transmission technologies, it may not be possible to supply electric energy at prices that are conducive to generalized economic welfare. Solar power is renewable and its conversion and transmission from space may be advantageous. The goal of this study is to assess the economic merit of space-based photovoltaic systems for power generation and a power relay satellite for power transmission. In this study, satellite solar power generation and transmission systems, as represented by current configurations of the Satellite Solar Power Station (SSPS) and the Power Relay Satellite (PRS), are compared with current and future terrestrial power generation and transmission systems to determine their technical and economic suitability for meeting power demands in the period of 1990 and beyond while meeting ever-increasing environmental and social constraints.
Emergent properties of nuclei from ab initio coupled-cluster calculations
Hagen, G.; Hjorth-Jensen, M.; Jansen, G. R.; ...
2016-05-17
Emergent properties such as nuclear saturation and deformation, and the effects on shell structure due to the proximity of the scattering continuum and particle decay channels are fascinating phenomena in atomic nuclei. In recent years, ab initio approaches to nuclei have taken the first steps towards tackling the computational challenge of describing these phenomena from Hamiltonians with microscopic degrees of freedom. Our endeavor is now possible due to ideas from effective field theories, novel optimization strategies for nuclear interactions, ab initio methods exhibiting a soft scaling with mass number, and ever-increasing computational power. We review some of the recent accomplishments. We also present new results. The recently optimized chiral interaction NNLO$_{\mathrm{sat}}$ is shown to provide an accurate description of both charge radii and binding energies in selected light- and medium-mass nuclei up to $^{56}$Ni. We derive an efficient scheme for including continuum effects in coupled-cluster computations of nuclei based on chiral nucleon–nucleon and three-nucleon forces, and present new results for unbound states in the neutron-rich isotopes of oxygen and calcium. Finally, the coupling to the continuum impacts the energies of the $J^{\pi}=1/2^{-}, 3/2^{-}, 7/2^{-}, 3/2^{+}$ states in $^{17,23,25}$O, and—contrary to naive shell-model expectations—the level ordering of the $J^{\pi}=3/2^{+}, 5/2^{+}, 9/2^{+}$ states in $^{53,55,61}$Ca.
Computation Directorate 2008 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2009-03-25
Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.
Manuel, John S
2003-01-01
Design innovations and government-sponsored financial incentives are making solar energy increasingly attractive to homeowners and institutional customers such as school districts. In particular, the passive solar design concept of daylighting is gaining favor among educators due to evidence of improved performance by students working in daylit classrooms. Electricity-generating photovoltaic systems are also becoming more popular, especially in states such as California that have high electric rates and frequent power shortages. To help spread the word about solar power, the U.S. Department of Energy staged its first-ever Solar Decathlon in October 2002. This event featured solar-savvy homes designed by 14 college teams. PMID:12573926
Active X based standards for healthcare integration.
Greenberg, D S; Welcker, B
1998-02-01
With cost pressures brought to the forefront by the growth of managed care, the integration of healthcare information systems is more important than ever. Providers of healthcare information are under increasing pressure to provide timely information to end users in a cost effective manner. Organizations have had to decide between the strong functionality that a multi-vendor 'best of breed' architecture provides and the strong integration provided by a single-vendor solution. As connectivity between systems increased, these interfaces were migrated to work across serial and eventually, network, connections. In addition, the content of the information became standardized through efforts like HL7 and ANSI X12 and Edifact. Although content-based standards go a long way towards facilitating interoperability, there is also quite a bit of work required to connect two systems even when they both adhere to the standard. A key to accomplishing this goal is increasing the connectivity between disparate systems in the healthcare environment. Microsoft is working with healthcare organizations and independent software vendors to bring Microsoft's powerful enterprise object technology, ActiveX, to the healthcare industry. Whilst object orientation has been heralded as the 'next big thing' in computer applications development, Microsoft believe that, in fact, component software is the technology which will provide the greatest benefit to end users.
Mechatronics: the future of mechanical engineering; past, present, and a vision for the future
NASA Astrophysics Data System (ADS)
Ramasubramanian, M. K.
2001-08-01
Mechatronics is the synergistic integration of precision mechanical engineering, electronics, computational hardware and software in the design of products and processes. Mechatronics, a term coined in Japan in the '70s, has evolved to symbolize what mechanical design engineers do today worldwide. The revolutionary introduction of the microprocessor (or microcontroller) in the early '80s and its ever-increasing performance-to-cost ratio have changed the paradigm of mechanical design forever, and have broadened the original definition of mechatronics to include intelligent control and autonomous decision-making. Today, an increasing number of new products are being developed at the intersection between traditional disciplines of Engineering, and Computer and Material Sciences. New developments in these traditional disciplines are being absorbed into mechatronics design at an ever-increasing pace. In this paper, a brief history of mechatronics and several examples of this rapid adaptation of technologies into product design are presented. With the ongoing information technology revolution, especially in wireless communication, smart sensor design (enabled by MEMS technology), and embedded systems engineering, mechatronics design is going through another step change in capabilities and scope. The implications of these developments in mechatronics design in the near future are discussed. Finally, deficiencies in our engineering curriculum to address the needs of the industry to cope with these rapid changes, and proposed remedies, will also be discussed.
Computation Directorate Annual Report 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L; McGraw, J R; Ashby, S F
Big computers are icons: symbols of the culture, and of the larger computing infrastructure that exists at Lawrence Livermore. Through the collective effort of Laboratory personnel, they enable scientific discovery and engineering development on an unprecedented scale. For more than three decades, the Computation Directorate has supplied the big computers that enable the science necessary for Laboratory missions and programs. Livermore supercomputing is uniquely mission driven. The high-fidelity weapon simulation capabilities essential to the Stockpile Stewardship Program compel major advances in weapons codes and science, compute power, and computational infrastructure. Computation's activities align with this vital mission of the Department of Energy. Increasingly, non-weapons Laboratory programs also rely on computer simulation. World-class achievements have been accomplished by LLNL specialists working in multi-disciplinary research and development teams. In these teams, Computation personnel employ a wide array of skills, from desktop support expertise, to complex applications development, to advanced research. Computation's skilled professionals make the Directorate the success that it has become. These individuals know the importance of the work they do and the many ways it contributes to Laboratory missions. They make appropriate and timely decisions that move the entire organization forward. They make Computation a leader in helping LLNL achieve its programmatic milestones. I dedicate this inaugural Annual Report to the people of Computation in recognition of their continuing contributions. I am proud that we perform our work securely and safely. Despite increased cyber attacks on our computing infrastructure from the Internet, advanced cyber security practices ensure that our computing environment remains secure. Through Integrated Safety Management (ISM) and diligent oversight, we address safety issues promptly and aggressively. The safety of our employees, whether at work or at home, is a paramount concern. Even as the Directorate meets today's supercomputing requirements, we are preparing for the future. We are investigating open-source cluster technology, the basis of our highly successful Multiprogrammatic Capability Resource (MCR). Several breakthrough discoveries have resulted from MCR calculations coupled with theory and experiment, prompting Laboratory scientists to demand ever-greater capacity and capability. This demand is being met by a new 23-TF system, Thunder, with architecture modeled on MCR. In preparation for the 'after-next' computer, we are researching technology even farther out on the horizon--cell-based computers. Assuming that the funding and the technology hold, we will acquire the cell-based machine BlueGene/L within the next 12 months.
Minimization search method for data inversion
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1975-01-01
A technique has been developed for determining values of selected subsets of independent variables in mathematical formulations. The required computation time increases with the first power of the number of variables. This is in contrast with classical minimization methods, for which computational time increases with the third power of the number of variables.
Power enhancement of piezoelectric transformers by adding heat transfer equipment.
Su, Yu-Hao; Liu, Yuan-Ping; Vasic, Dejan; Wu, Wen-Jong; Costa, François; Lee, Chih-Kung
2012-10-01
It is known that piezoelectric transformers have several inherent advantages compared with conventional electromagnetic transformers. However, the maximum power capacity of piezoelectric transformers is not as large as that of electromagnetic transformers in practice, especially in the case of high output current. The theoretical power density of piezoelectric transformers calculated by stress boundary can reach 330 W/cm³, but no piezoelectric transformer has ever reached such a high power density in practice. The power density of piezoelectric transformers is limited to 33 W/cm³ in practical applications. The underlying reason is that the maximum passing current of the piezoelectric material (mechanical current) is limited by the temperature rise caused by heat generation. To increase this current and the power capacity, we proposed to add a thermal pad to the piezoelectric transformer to dissipate heat. The experimental results showed that the proposed technique can increase the output current of the piezoelectric transformer threefold. A theoretical-phenomenological model which explains the relationship between vibration velocity and generated heat is also established to verify the experimental results.
Integrated Maintenance Information System (IMIS): A Maintenance Information Delivery Concept.
1987-11-01
[Figure 2: Portable Maintenance Computer Concept.] ...provide advice for difficult fault-isolation problems. The technician will be able to accomplish... faced with an ever-growing number of paper-based technical orders (TOs). This has greatly increased costs and distribution problems. In addition, it has... compounded problems associated with ensuring accurate data and the lengthy correction times involved. To improve the accuracy of technical data and...
Computational science: shifting the focus from tools to models
Hinsen, Konrad
2014-01-01
Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728
Software Support for Transiently Powered Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Der Woude, Joel Matthew
With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.
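The checkpointing idea is easy to illustrate. Below is a minimal, hand-written Python sketch of the general pattern (persist state atomically, resume from the last committed checkpoint after a power failure); it is only an illustration, not Ratchet, which inserts checkpoints automatically at compile time for embedded targets. The file name and state layout here are hypothetical.

```python
# Minimal sketch of checkpoint/restore across power cycles (illustration only).
import os
import pickle

CHECKPOINT = "state.ckpt"  # stand-in for a non-volatile storage location

def load_state():
    """Resume from the last committed checkpoint, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"i": 0, "acc": 0}

def save_state(state):
    """Write the checkpoint atomically so a power loss never leaves it corrupted."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, CHECKPOINT)  # atomic commit of the new checkpoint

def long_running_sum(n, checkpoint_every=1000):
    """A long computation that survives arbitrary power failures."""
    state = load_state()
    while state["i"] < n:
        state["acc"] += state["i"]
        state["i"] += 1
        if state["i"] % checkpoint_every == 0:
            save_state(state)  # progress preserved up to this point
    return state["acc"]

if __name__ == "__main__":
    print(long_running_sum(10_000))
```

The atomic rename is the key design point: a power loss during save_state leaves either the old or the new checkpoint intact, never a half-written one.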
Improving the Computational Thinking Pedagogical Capabilities of School Teachers
ERIC Educational Resources Information Center
Bower, Matt; Wood, Leigh N.; Lai, Jennifer W. M.; Howe, Cathie; Lister, Raymond; Mason, Raina; Highfield, Kate; Veal, Jennifer
2017-01-01
The idea of computational thinking as skills and universal competence which every child should possess emerged last decade and has been gaining traction ever since. This raises a number of questions, including how to integrate computational thinking into the curriculum, whether teachers have computational thinking pedagogical capabilities to teach…
NASA Astrophysics Data System (ADS)
Scarella, Gilles; Clatz, Olivier; Lanteri, Stéphane; Beaume, Grégory; Oudot, Steve; Pons, Jean-Philippe; Piperno, Sergo; Joly, Patrick; Wiart, Joe
2006-06-01
The ever-rising diffusion of cellular phones has brought about an increased concern for the possible consequences of electromagnetic radiation on human health. Possible thermal effects have been investigated, via experimentation or simulation, by several research projects in the last decade. Concerning numerical modeling, the power absorption in a user's head is generally computed using discretized models built from clinical MRI data. The vast majority of such numerical studies have been conducted using Finite Differences Time Domain methods, although strong limitations of their accuracy are due to heterogeneity, poor definition of the detailed structures of head tissues (staircasing effects), etc. In order to propose numerical modeling using Finite Element or Discontinuous Galerkin Time Domain methods, reliable automated tools for the unstructured discretization of human heads are also needed. Results presented in this article aim at filling the gap between human head MRI images and the accurate numerical modeling of wave propagation in biological tissues and its thermal effects. To cite this article: G. Scarella et al., C. R. Physique 7 (2006).
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
Virtual Observatory and Distributed Data Mining
NASA Astrophysics Data System (ADS)
Borne, Kirk D.
2012-03-01
New modes of discovery are enabled by the growth of data and computational resources (i.e., cyberinfrastructure) in the sciences. This cyberinfrastructure includes structured databases, virtual observatories (distributed data, as described in Section 20.2.1 of this chapter), high-performance computing (petascale machines), distributed computing (e.g., the Grid, the Cloud, and peer-to-peer networks), intelligent search and discovery tools, and innovative visualization environments. Data streams from experiments, sensors, and simulations are increasingly complex and growing in volume. This is true in most sciences, including astronomy, climate simulations, Earth observing systems, remote sensing data collections, and sensor networks. At the same time, we see an emerging confluence of new technologies and approaches to science, most clearly visible in the growing synergism of the four modes of scientific discovery: sensors-modeling-computing-data (Eastman et al. 2005). This has been driven by numerous developments, including the information explosion, development of large-array sensors, acceleration in high-performance computing (HPC) power, advances in algorithms, and efficient modeling techniques. Among these, the most extreme is the growth in new data. Specifically, the acquisition of data in all scientific disciplines is rapidly accelerating and causing a data glut (Bell et al. 2007). It has been estimated that data volumes double every year—for example, the NCSA (National Center for Supercomputing Applications) reported that their users cumulatively generated one petabyte of data over the first 19 years of NCSA operation, but they then generated their next one petabyte in the next year alone, and the data production has been growing by almost 100% each year after that (Butler 2008). The NCSA example is just one of many demonstrations of the exponential (annual data-doubling) growth in scientific data collections. In general, this putative data-doubling is an inevitable result of several compounding factors: the proliferation of data-generating devices, sensors, projects, and enterprises; the 18-month doubling of the digital capacity of these microprocessor-based sensors and devices (commonly referred to as "Moore’s law"); the move to digital for nearly all forms of information; the increase in human-generated data (both unstructured information on the web and structured data from experiments, models, and simulation); and the ever-expanding capability of higher density media to hold greater volumes of data (i.e., data production expands to fill the available storage space). These factors are consequently producing an exponential data growth rate, which will soon (if not already) become an insurmountable technical challenge even with the great advances in computation and algorithms. This technical challenge is compounded by the ever-increasing geographic dispersion of important data sources—the data collections are not stored uniformly at a single location, or with a single data model, or in uniform formats and modalities (e.g., images, databases, structured and unstructured files, and XML data sets)—the data are in fact large, distributed, heterogeneous, and complex. The greatest scientific research challenge with these massive distributed data collections is consequently extracting all of the rich information and knowledge content contained therein, thus requiring new approaches to scientific research. 
This emerging data-intensive and data-oriented approach to scientific research is sometimes called discovery informatics or X-informatics (where X can be any science, such as bio, geo, astro, chem, eco, or anything; Agresti 2003; Gray 2003; Borne 2010). This data-oriented approach to science is now recognized by some (e.g., Mahootian and Eastman 2009; Hey et al. 2009) as the fourth paradigm of research, following (historically) experiment/observation, modeling/analysis, and computational science.
GIER: A Danish computer from 1961 with a role in the modern revolution of astronomy - II
NASA Astrophysics Data System (ADS)
Høg, Erik
2018-04-01
A Danish computer, GIER, from 1961 played a vital role in the development of a new method for astrometric measurement. This method, photon counting astrometry, ultimately led to two satellites with a significant role in the modern revolution of astronomy. A GIER was installed at the Hamburg Observatory in 1964 where it was used to implement the entirely new method for the measurement of stellar positions by means of a meridian circle, at that time the fundamental instrument of astrometry. An expedition to Perth in Western Australia with the instrument and the computer was a success. This method was also implemented in space in the first ever astrometric satellite Hipparcos launched by ESA in 1989. The Hipparcos results published in 1997 revolutionized astrometry with an impact in all branches of astronomy from the solar system and stellar structure to cosmic distances and the dynamics of the Milky Way. In turn, the results paved the way for a successor, the one million times more powerful Gaia astrometry satellite launched by ESA in 2013. Preparations for a Gaia successor in twenty years are making progress.
Report to the Institutional Computing Executive Group (ICEG) August 14, 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carnes, B
We have delayed this report from its normal distribution schedule for two reasons. First, due to the coverage provided in the White Paper on Institutional Capability Computing Requirements distributed in August 2005, we felt a separate 2005 ICEG report would not be value added. Second, we wished to provide some specific information about the Peloton procurement and we have just now reached a point in the process where we can make some definitive statements. The Peloton procurement will result in an almost complete replacement of current M&IC systems. We have plans to retire MCR, iLX, and GPS. We will replace them with new parallel and serial capacity systems based on the same node architecture in the new Peloton capability system named ATLAS. We are currently adding the first users to the Green Data Oasis, a large file system on the open network that will provide the institution with external collaboration data sharing. Only Thunder will remain from the current M&IC system list and it will be converted from Capability to Capacity. We are confident that we are entering a challenging yet rewarding new phase for the M&IC program. Institutional computing has been an essential component of our S&T investment strategy and has helped us achieve recognition in many scientific and technical forums. Through consistent institutional investments, M&IC has grown into a powerful unclassified computing resource that is being used across the Lab to push the limits of computing and its application to simulation science. With the addition of Peloton, the Laboratory will significantly increase the broad-based computing resources available to meet the ever-increasing demand for the large scale simulations indispensable to advancing all scientific disciplines. All Lab research efforts are bolstered through the long term development of mission driven scalable applications and platforms. The new systems will soon be fully utilized and will position Livermore to extend the outstanding science and technology breakthroughs the M&IC program has enabled to date.
NASA Astrophysics Data System (ADS)
Hoepfer, Matthias
Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the invoking of currently nonexistent technologies, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding, for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (roughly a doubling of computational power every two years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, enforcing the necessity to make models more flexible to be able to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is co-simulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models to a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed.
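To make the coupling pattern concrete, here is a deliberately simplified Python sketch of a fixed-step (Jacobi-type) co-simulation loop: two first-order sub-models integrate internally with their own micro step and exchange outputs only at communication points. The coefficients and step sizes are hypothetical, and real co-simulation frameworks add the adaptive macro-stepping, error control, and algebraic-loop handling that the dissertation addresses.

```python
# Minimal sketch of a fixed-step co-simulation loop coupling two sub-models
# that exchange outputs only at communication points (illustration only).

def make_submodel(a, coupling_gain, x0):
    """First-order ODE  dx/dt = -a*x + coupling_gain*u,  integrated with explicit Euler."""
    state = {"x": x0}

    def step(u, t_span, micro_dt):
        t = 0.0
        while t < t_span:
            h = min(micro_dt, t_span - t)           # internal micro step
            state["x"] += h * (-a * state["x"] + coupling_gain * u)
            t += h
        return state["x"]                           # output fed to the other sub-model

    return step

def cosimulate(t_end=5.0, macro_dt=0.1, micro_dt=0.01):
    step_a = make_submodel(a=1.0, coupling_gain=0.5, x0=1.0)
    step_b = make_submodel(a=2.0, coupling_gain=0.3, x0=0.0)
    ya, yb = 1.0, 0.0                               # initial coupling signals
    t = 0.0
    while t < t_end:
        # Both sub-models advance one macro step using the other's previous output.
        ya_new = step_a(yb, macro_dt, micro_dt)
        yb_new = step_b(ya, macro_dt, micro_dt)
        ya, yb = ya_new, yb_new
        t += macro_dt
    return ya, yb

if __name__ == "__main__":
    print(cosimulate())
```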
Kudi: A free open-source python library for the analysis of properties along reaction paths.
Vogt-Geisse, Stefan
2016-05-01
With increasing computational capabilities, an ever growing amount of data is generated in computational chemistry that contains a vast amount of chemically relevant information. It is therefore imperative to create new computational tools in order to process and extract this data in a sensible way. Kudi is an open source library that aids in the extraction of chemical properties from reaction paths. The straightforward structure of Kudi makes it easy to use for users and allows for effortless implementation of new capabilities, and extension to any quantum chemistry package. A use case for Kudi is shown for the tautomerization reaction of formic acid. Kudi is available free of charge at www.github.com/stvogt/kudi.
Non Volatile Flash Memory Radiation Tests
NASA Technical Reports Server (NTRS)
Irom, Farokh; Nguyen, Duc N.; Allen, Greg
2012-01-01
The commercial flash memory industry has experienced fast growth in recent years because of the widespread use of flash memories in cell phones, mp3 players, and digital cameras. At the same time, there has been increased interest in the use of high-density commercial nonvolatile flash memories in space because of ever-increasing data requirements and strict power requirements. Because of their complex structure, flash memories cannot be treated as just simple memories with regard to testing and analysis, and it becomes quite challenging to determine how they will respond in radiation environments.
Security model for VM in cloud
NASA Astrophysics Data System (ADS)
Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.
2013-03-01
Cloud computing is a new approach that has emerged to meet the ever-increasing demand for computing resources and to reduce operational costs and capital expenditure for IT services. Because this way of computing allows data and applications to be stored away from one's own corporate server, it raises more security issues, such as virtualization security, distributed computing, application security, identity management, access control, and authentication. Even though virtualization forms the basis for cloud computing, it poses many threats to securing the cloud. As most security threats lie at the virtualization layer in the cloud, we propose a new Security Model for Virtual Machine in Cloud (SMVC), in which every process is authenticated by a Trusted Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications related to data mining, OLAP systems, and image processing, which require huge resources in the cloud deployed on one or more VMs.
Making power visible: Doing theatre-based status work with nursing students.
Taylor, Steven S; Taylor, Rosemary A
2017-09-01
As part of a senior leadership class in an undergraduate baccalaureate nursing program in the northeastern United States, we conducted an experiential, theater-based workshop designed to increase student awareness of the micro-dynamics of power and the enactment of status in their day-to-day lives. These exercises allowed student participants to embody status and power and understand it in ways that they did not after simply completing assigned readings. At the conclusion of the workshop the participants were asked to reflect on their status habits and the consequences of these habits in a single hand-written page. The participants' reflections showed two interesting trends. The first is that a relatively short workshop dramatically increased participants' awareness of power and status as ever present, including a substantial normative move from seeing using power as being a generally bad thing that can be justified in the interests of the organization's mission to a more neutral stance that power and status are at work in all of our interactions. The second trend that emerged was the tendency for participants to focus on agency-based explanations of power dynamics. Copyright © 2017 Elsevier Ltd. All rights reserved.
Moving Beyond Pretense: Nuclear Power and Nonproliferation
2014-06-01
and Hezbollah, might be at a heightened risk of transferring nuclear weapons to terrorists. Moreover, even if no state would ever intentionally... programs—basically IAEA inspections and export controls—we seem to be slipping into reliance on greatly increased national intelligence operations, both... generators, turbine, piping, and other large equipment needed for the system. • By 2008: A total of 82 megatons of fuel enriched up to 3.62 percent U
Natural Gas as an Instrument of Russian State Power (Letort Paper)
2011-10-01
effects the price increase had on the Ukrainian industry as a whole; and does not account for the costs to the Ukrainian economy of the 2006 and... a political consideration); and the differential cost of sanctions among the senders (largely an economic factor). First, allies hardly ever agree... sanctions is extremely hard to realize. Second, the cost of imposing sanctions is simply greater for some sender states than the potential
The Experimental Mathematician: The Pleasure of Discovery and the Role of Proof
ERIC Educational Resources Information Center
Borwein, Jonathan M.
2005-01-01
The emergence of powerful mathematical computing environments, the growing availability of correspondingly powerful (multi-processor) computers, and the pervasive presence of the Internet allow mathematicians, students and teachers to proceed heuristically and "quasi-inductively." We may increasingly use symbolic and numeric computation,…
[CAT system and its application in training for manned space flight].
Zhu, X Q; Chen, D M
2000-02-01
As aerospace missions become increasingly frequent and complex, training becomes ever more critical, and training devices at all levels are in demand. The Computer-Aided Training (CAT) system, because it is economical, efficient and flexible, is attracting more and more attention. In this paper, the basic factors of a CAT system are discussed and the applications of CAT systems in training for manned space flight are illustrated. Prospects for further development of CAT systems are then considered.
Materials Genome Initiative Element
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
NASA is committed to developing new materials and manufacturing methods that can enable new missions with ever increasing mission demands. Typically, the development and certification of new materials and manufacturing methods in the aerospace industry has required more than 20 years of development time with a costly testing and certification program. To reduce the cost and time to mature these emerging technologies, NASA is developing computational materials tools to improve understanding of the material and guide the certification process.
Lightweight fuzzy processes in clinical computing.
Hurdle, J F
1997-09-01
In spite of advances in computing hardware, many hospitals still have a hard time finding extra capacity in their production clinical information system to run artificial intelligence (AI) modules, for example: to support real-time drug-drug or drug-lab interaction checking; to track infection trends; to monitor compliance with case-specific clinical guidelines; or to monitor/control biomedical devices like an intelligent ventilator. Historically, adding AI functionality was not a major design concern when a typical clinical system was originally specified. AI technology is usually retrofitted 'on top of the old system' or 'run off line' in tandem with the old system to ensure that the routine workload still gets done (with as little impact from the AI side as possible). To compound the burden on system performance, most institutions have witnessed a long and increasing trend for intramural and extramural reporting (e.g. the collection of data for a quality-control report in microbiology, or a meta-analysis of a suite of coronary artery bypass graft techniques), and these place an ever-growing burden on the typical computer system's performance. We discuss a promising approach to adding extra AI processing power to a heavily used system based on the notion of 'lightweight fuzzy processing' (LFP), that is, fuzzy modules designed from the outset to impose a small computational load. A formal model for a useful subclass of fuzzy systems is defined below and is used as a framework for the automated generation of LFPs. By seeking to reduce the arithmetic complexity of the model (a hand-crafted process) and the data complexity of the model (an automated process), we show how LFPs can be generated for three sample datasets of clinical relevance.
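As a rough illustration of what a lightweight fuzzy module can look like, the sketch below evaluates a tiny rule base with triangular membership functions and a weighted-average output. It is a generic example with hypothetical clinical inputs and thresholds, not the formal LFP model defined in the paper.

```python
# Minimal sketch of a lightweight fuzzy rule evaluation (illustration only).

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def drug_interaction_risk(dose_mg, creatinine):
    # Hypothetical inputs and rule base, purely for illustration.
    dose_high = tri(dose_mg, 200, 400, 600)
    dose_low = tri(dose_mg, 0, 100, 300)
    renal_poor = tri(creatinine, 1.2, 2.5, 4.0)
    renal_ok = tri(creatinine, 0.2, 0.8, 1.5)

    # Rule strength = min of antecedents; consequent = crisp risk level.
    rules = [
        (min(dose_high, renal_poor), 0.9),  # high dose AND poor renal function -> high risk
        (min(dose_high, renal_ok),   0.5),
        (min(dose_low,  renal_poor), 0.4),
        (min(dose_low,  renal_ok),   0.1),
    ]
    total = sum(weight for weight, _ in rules)
    return sum(weight * risk for weight, risk in rules) / total if total else 0.0

if __name__ == "__main__":
    print(round(drug_interaction_risk(450, 2.8), 2))
```

The arithmetic involved is only a handful of comparisons and multiplications per invocation, which is the sense in which such a module imposes a small computational load.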
SOI technology for power management in automotive and industrial applications
NASA Astrophysics Data System (ADS)
Stork, Johannes M. C.; Hosey, George P.
2017-02-01
Semiconductor on Insulator (SOI) technology offers an assortment of opportunities for chip manufacturers in the Power Management market. Recent advances in the automotive and industrial markets, along with emerging features, the increasing use of sensors, and the ever-expanding "Internet of Things" (IoT) are providing for continued growth in these markets while also driving more complex solutions. The potential benefits of SOI include the ability to place both high-voltage and low-voltage devices on a single chip, saving space and cost, simplifying designs and models, and improving performance, thereby cutting development costs and improving time to market. SOI also offers novel new approaches to long-standing technologies.
High-speed Si/GeSi hetero-structure Electro Absorption Modulator.
Mastronardi, L; Banakar, M; Khokhar, A Z; Hattasan, N; Rutirawut, T; Bucio, T Domínguez; Grabska, K M; Littlejohns, C; Bazin, A; Mashanovich, G; Gardes, F Y
2018-03-19
The ever-increasing demand for integrated, low power interconnect systems is pushing the bandwidth density of CMOS photonic devices. Taking advantage of the strong Franz-Keldysh effect in the C and L communication bands, electro-absorption modulators in Ge and GeSi are setting a new standard in terms of device footprint and power consumption for next generation photonics interconnect arrays. In this paper, we present a compact, low power electro-absorption modulator (EAM) Si/GeSi hetero-structure based on an 800 nm SOI overlayer with a modulation bandwidth of 56 GHz. The device design and fabrication tolerant process are presented, followed by the measurement analysis. Eye diagram measurements show a dynamic ER of 5.2 dB at a data rate of 56 Gb/s at 1566 nm, and calculated modulator power is 44 fJ/bit.
MacDoctor: The Macintosh diagnoser
NASA Technical Reports Server (NTRS)
Lavery, David B.; Brooks, William D.
1990-01-01
When the Macintosh computer was first released, the primary user was a computer hobbyist who typically had a significant technical background and was highly motivated to understand the internal structure and operational intricacies of the computer. In recent years the Macintosh computer has become a widely-accepted general purpose computer which is being used by an ever-increasing non-technical audience. This has led to a large base of users who have neither the interest nor the background to understand what is happening 'behind the scenes' when the Macintosh is put to use - or what should be happening when something goes wrong. Additionally, the Macintosh itself has evolved from a simple closed design to a complete family of processor platforms and peripherals with a tremendous number of possible configurations. With the increasing popularity of the Macintosh series, software and hardware developers are producing a product for every user's need. As the complexity of configuration possibilities grows, the need for experienced or even expert knowledge is required to diagnose problems. This presents a problem to uneducated or casual users. This problem indicates a new Macintosh consumer need; that is, a diagnostic tool able to determine the problem for the user. As the volume of Macintosh products has increased, this need has also increased.
Space shuttle main engine controller
NASA Technical Reports Server (NTRS)
Mattox, R. M.; White, J. B.
1981-01-01
A technical description of the space shuttle main engine controller, which provides engine checkout prior to launch, engine control and monitoring during launch, and engine safety and monitoring in orbit, is presented. Each of the major controller subassemblies, the central processing unit, the computer interface electronics, the input electronics, the output electronics, and the power supplies are described and discussed in detail along with engine and orbiter interfaces and operational requirements. The controller represents a unique application of digital concepts, techniques, and technology in monitoring, managing, and controlling a high performance rocket engine propulsion system. The operational requirements placed on the controller, the extremely harsh operating environment to which it is exposed, and the reliability demanded, result in the most complex and rugged digital system ever designed, fabricated, and flown.
Early repositioning through compound set enrichment analysis: a knowledge-recycling strategy.
Temesi, Gergely; Bolgár, Bence; Arany, Adám; Szalai, Csaba; Antal, Péter; Mátyus, Péter
2014-04-01
Despite famous serendipitous drug repositioning success stories, systematic projects have not yet delivered the expected results. However, repositioning technologies are gaining ground in different phases of routine drug development, together with new adaptive strategies. We demonstrate the power of the compound information pool, the ever-growing heterogeneous information repertoire of approved drugs and candidates as an invaluable catalyzer in this transition. Systematic, computational utilization of this information pool for candidates in early phases is an open research problem; we propose a novel application of the enrichment analysis statistical framework for fusion of this information pool, specifically for the prediction of indications. Pharmaceutical consequences are formulated for a systematic and continuous knowledge recycling strategy, utilizing this information pool throughout the drug-discovery pipeline.
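The statistical core of such an enrichment analysis is a simple over-representation test. The sketch below shows a generic hypergeometric version in Python; the compound sets and names are hypothetical, and the authors' actual pipeline fuses many information sources beyond this single test.

```python
# Generic compound-set enrichment test (hypergeometric over-representation),
# in the spirit of the enrichment-analysis framework the abstract describes.
from scipy.stats import hypergeom

def enrichment_pvalue(candidate_hits, annotated_set, universe):
    """P(observing at least this many annotated compounds among the hits by chance)."""
    k = len(candidate_hits & annotated_set)   # annotated compounds among the hits
    M = len(universe)                         # all compounds considered
    n = len(annotated_set)                    # compounds annotated with the indication
    N = len(candidate_hits)                   # size of the hit list
    return hypergeom.sf(k - 1, M, n, N)       # upper-tail probability P(X >= k)

if __name__ == "__main__":
    universe = {f"cpd{i}" for i in range(1000)}
    annotated = {f"cpd{i}" for i in range(50)}           # e.g. drugs known for one indication
    hits = {f"cpd{i}" for i in range(40)} | {"cpd900"}   # neighbours of a new candidate
    print(enrichment_pvalue(hits, annotated, universe))
```

A small p-value suggests the candidate's neighbourhood is unusually rich in compounds already linked to that indication, which is the kind of signal used to predict new indications.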
NREL Teams with Southern California Gas to Launch First Power-to-Gas
demonstration projects to create and test a carbon-free, power-to-gas system for the first time ever in the ... as solar and wind power, to make carbon-free hydrogen gas by breaking down water into hydrogen and ...
NASA Astrophysics Data System (ADS)
Cork, Chris; Lugg, Robert; Chacko, Manoj; Levi, Shimon
2005-06-01
With the exponential increase in output database size due to the aggressive optical proximity correction (OPC) and resolution enhancement technique (RET) required for deep sub-wavelength process nodes, the CPU time required for mask tape-out continues to increase significantly. For integrated device manufacturers (IDMs), this can impact the time-to-market for their products, where even a few days' delay could have a huge commercial impact and loss of market window opportunity. For foundries, a shorter turnaround time provides a competitive advantage in their demanding market: too slow could mean customers looking elsewhere for these services, while a fast turnaround may even command a higher price. With FAB turnaround of a mature, plain-vanilla CMOS process of around 20-30 days, a delay of several days in mask tapeout would contribute a significant fraction to the total time to deliver prototypes. Unlike silicon processing, mask tape-out time can be decreased by simply purchasing extra computing resources and software licenses. Mask tape-out groups are taking advantage of the ever-decreasing hardware cost and increasing power of commodity processors. The significant distributability inherent in some commercial Mask Synthesis software can be leveraged to address this critical business issue. Different implementations have different fractions of the code that cannot be parallelized and this affects the efficiency with which it scales, as is described by Amdahl's law. Very few are efficient enough to allow the effective use of 1000's of processors, enabling run times to drop from days to only minutes. What follows is a cost-aware methodology to quantify the scalability of this class of software, and thus act as a guide to estimating the optimal investment in terms of hardware and software licenses.
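For reference (this is not part of the abstract), Amdahl's law makes the scaling limit explicit: if a fraction s of the work cannot be parallelized, the speedup on P processors is

```latex
S(P) = \frac{1}{s + \frac{1 - s}{P}},
\qquad
\lim_{P \to \infty} S(P) = \frac{1}{s}
```

so effective use of thousands of processors requires the serial fraction to be on the order of 0.1% or less (s = 0.001 caps the speedup at 1000), which is why so few implementations scale to that level.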
Climate Modeling with a Million CPUs
NASA Astrophysics Data System (ADS)
Tobis, M.; Jackson, C. S.
2010-12-01
Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations. We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may be easily contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level, multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies which invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.
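The ensemble-management pattern described above can be sketched in miniature: an annealing-style search in which each cost evaluation stands in for scheduling and scoring one moderately large parallel model run. This is a generic simulated-annealing illustration with hypothetical parameters and cost function, not the Multiple Very Fast Simulated Annealing implementation used by the authors.

```python
# Generic annealing-style parameter search; run_model() is a placeholder for
# submitting an ensemble member and scoring its output against observations.
import math
import random

def run_model(params):
    """Hypothetical cost: misfit of two tuned parameters against target values."""
    return (params[0] - 1.7) ** 2 + (params[1] + 0.4) ** 2

def anneal(n_steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    params = [rng.uniform(-3, 3), rng.uniform(-3, 3)]
    cost = run_model(params)
    for k in range(1, n_steps + 1):
        temp = t0 / math.log(k + 1)                    # slow cooling schedule
        trial = [p + rng.gauss(0, 0.3) for p in params]
        trial_cost = run_model(trial)
        # Accept improvements always, and some uphill moves while "hot".
        if trial_cost < cost or rng.random() < math.exp(-(trial_cost - cost) / temp):
            params, cost = trial, trial_cost
    return params, cost

if __name__ == "__main__":
    print(anneal())
```

In the setting the abstract describes, each call to run_model would itself be a multi-processor job of tens to hundreds of CPU-hours, so the search logic above is the coarse "thread" that schedules those jobs.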
The PAW/GIPAW approach for computing NMR parameters: a new dimension added to NMR study of solids.
Charpentier, Thibault
2011-07-01
In 2001, Mauri and Pickard introduced the gauge-including projector augmented wave (GIPAW) method, which enabled for the first time the calculation of all-electron NMR parameters in solids, i.e. accounting for periodic boundary conditions. The GIPAW method is rooted in the plane-wave pseudopotential formalism of density functional theory (DFT) and avoids the use of the cluster approximation. This method has undoubtedly revitalized interest in quantum chemical calculations in the solid-state NMR community. It has quickly evolved and improved so that the calculation of the key components of NMR interactions, namely the shielding and electric field gradient tensors, has now become routine for most of the common nuclei studied in NMR. The availability of reliable implementations in several software packages (CASTEP, Quantum Espresso, PARATEC) makes its usage increasingly popular, perhaps indispensable in the near future for all materials NMR studies. The majority of nuclei of the periodic table have already been investigated by GIPAW, and because of its high accuracy it is quickly becoming an essential tool for interpreting and understanding experimental NMR spectra, providing reliable assignments of the observed resonances to crystallographic sites or enabling a priori prediction of NMR data. The continuous increase of computing power makes ever larger (and thus more realistic) systems amenable to first-principles analysis. In the near future, as the incorporation of dynamical effects and/or disorder is still at an early stage of development, these areas will certainly be the prime targets. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko
Lowering power consumption has become a worldwide concern. It is also becoming a bigger issue in computer systems, as reflected by the growing use of software-as-a-service and cloud computing, whose market has grown since 2000; at the same time, the number of data centers that house and manage these computers has increased rapidly. Power consumption at data centers accounts for a big share of total IT power usage and is still rapidly increasing. This research focuses on the air-conditioning that accounts for the biggest portion of electric power consumption by data centers, and proposes a technique to lower power consumption by using natural cool air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we also examine the extent to which energy reduction is possible when a data center is located in Hokkaido.
NASA Astrophysics Data System (ADS)
Bärnreuther, Peter; Czakon, Michał; Mitov, Alexander
2012-09-01
We compute the next-to-next-to-leading order QCD corrections to the partonic reaction that dominates top-pair production at the Tevatron. This is the first ever next-to-next-to-leading order calculation of an observable with more than two colored partons and/or massive fermions at hadron colliders. Augmenting our fixed order calculation with soft-gluon resummation through next-to-next-to-leading logarithmic accuracy, we observe that the predicted total inclusive cross section exhibits a very small perturbative uncertainty, estimated at ±2.7%. We expect that once all subdominant partonic reactions are accounted for, and work in this direction is ongoing, the perturbative theoretical uncertainty for this observable could drop below ±2%. Our calculation demonstrates the power of our computational approach and proves it can be successfully applied to all processes at hadron colliders for which high-precision analyses are needed.
Reshaping Computer Literacy Teaching in Higher Education: Identification of Critical Success Factors
ERIC Educational Resources Information Center
Taylor, Estelle; Goede, Roelien; Steyn, Tjaart
2011-01-01
Purpose: Acquiring computer skills is more important today than ever before, especially in a developing country. Teaching of computer skills, however, has to adapt to new technology. This paper aims to model factors influencing the success of the learning of computer literacy by means of an e-learning environment. The research question for this…
The mass of massive rover software
NASA Technical Reports Server (NTRS)
Miller, David P.
1993-01-01
A planetary rover, like a spacecraft, must be fully self contained. Once launched, a rover can only receive information from its designers, and if solar powered, power from the Sun. As the distance from Earth increases, and the demands for power on the rover increase, there is a serious tradeoff between communication and computation. Both of these subsystems are very power hungry, and both can be the major driver of the rover's power subsystem, and therefore the minimum mass and size of the rover. This situation and software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed.
The next generation of neural network chips
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.
There have been many national and international neural networks research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALA TEA, NERVES, ELENE NERVES 2) -- just to mention a few. Recent developments in the field of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could imagine just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideals should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.
Can NHS politics, power and conflict ever be a good thing for nurses?
Lees, Carolyn
2016-07-14
This article explores how organisational politics, power and conflict have a positive role to play for nurses in NHS organisational change and improvement, rather than always leading to disagreement and dispute.
Creating a Parallel Version of VisIt for Microsoft Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlock, B J; Biagas, K S; Rawson, P L
2011-12-07
VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt comprises a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
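The data-parallel pattern described above -- each MPI process reads a subset of the data, filters it locally, and the partial results are combined -- can be illustrated with a minimal sketch. This is not VisIt code; it uses mpi4py, and the chunk contents and the isovalue filter are hypothetical stand-ins.

```python
# Minimal sketch of the data-parallel pattern the abstract describes:
# each MPI rank owns a chunk of the domain, filters it locally, and the
# partial results are reduced on rank 0. Not VisIt code.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Pretend each rank owns one chunk of a larger scalar field.
local_chunk = np.random.default_rng(rank).random(1_000_000)

# Local "filter": count cells above an isovalue.
isovalue = 0.95
local_count = int(np.count_nonzero(local_chunk > isovalue))

# Combine partial results; rank 0 reports the global answer.
total = comm.reduce(local_count, op=MPI.SUM, root=0)
if rank == 0:
    print(f"cells above {isovalue}: {total} (from {size} ranks)")
```

Launched with, for example, `mpiexec -n 4 python sketch.py`, each rank processes only its own chunk and just the reduced count travels over the network.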
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems, so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
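For readers unfamiliar with exact ATSP solving, the sketch below shows the classic Held-Karp dynamic program, which returns a provably optimal directed tour length for small instances. It is only a baseline for intuition, not one of the branch-and-bound algorithms developed in the thesis, which scale to far larger problems.

```python
# Exact ATSP by Held-Karp dynamic programming -- a small-scale baseline,
# not the parallel branch-and-bound algorithms of the thesis.
from itertools import combinations

def held_karp(cost):
    """cost[i][j] = directed edge cost; returns the optimal tour length through all cities."""
    n = len(cost)
    # dp[(S, j)] = cheapest path that starts at city 0, visits set S, and ends at j
    dp = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for j in S:
                dp[(Sf, j)] = min(
                    dp[(Sf - {j}, k)] + cost[k][j] for k in S if k != j
                )
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))

cost = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(held_karp(cost))  # prints 21, the optimal directed tour length
```

The O(n^2 * 2^n) cost of this dynamic program is exactly why large instances such as the 7,500-city ATSPs above require the smarter bounding and parallel search strategies the thesis describes.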
NASA Astrophysics Data System (ADS)
Bedkihal, Salil; Vaccaro, Joan; Barnett, S. M.
Aberg has claimed in a recent Letter that the coherence of a reservoir can be used repeatedly to perform coherent operations without ever diminishing in power to do so. The claim has particular relevance for quantum thermodynamics because, as previously shown, latent energy that is locked by coherence may be extractable without incurring any additional cost. We show here, however, that repeated use of the reservoir gives an overall coherent operation of diminished accuracy and is necessarily accompanied by an increased thermodynamic cost.
ERIC Educational Resources Information Center
Drossel, Kerstin; Eickelmann, Birgit
2017-01-01
The increasing availability of new technologies in an ever more digitalized world has gained momentum in practically all spheres of life, making technology-related skills a key competence not only in professional settings. Thus, schools assume responsibility for imparting these skills to their students, and hence to future generations of…
Lackey, Amanda E; Pandey, Tarun; Moshiri, Mariam; Lalwani, Neeraj; Lall, Chandana; Bhargava, Puneet
2014-06-01
It is an opportune time for radiologists to focus on personal productivity. The ever increasing reliance on computers and the Internet has significantly changed the way we work. Myriad software applications are available to help us improve our personal efficiency. In this article, the authors discuss some tools that help improve collaboration and personal productivity, maximize e-learning, and protect valuable digital data. Published by Elsevier Inc.
Space-to-Space Power Beaming Enabling High Performance Rapid Geocentric Orbit Transfer
NASA Technical Reports Server (NTRS)
Dankanich, John W.; Vassallo, Corinne; Tadge, Megan
2015-01-01
The use of electric propulsion is more prevalent than ever, with industry pursuing all electric orbit transfers. Electric propulsion provides high mass utilization through efficient propellant transfer. However, the transfer times become detrimental as the delta V transitions from near-impulsive to low-thrust. Increasing power and therefore thrust has diminishing returns as the increasing mass of the power system limits the potential acceleration of the spacecraft. By using space-to-space power beaming, the power system can be decoupled from the spacecraft and allow significantly higher spacecraft alpha (W/kg) and therefore enable significantly higher accelerations while maintaining high performance. This project assesses the efficacy of space-to-space power beaming to enable rapid orbit transfer while maintaining high mass utilization. Concept assessment requires integrated techniques for low-thrust orbit transfer steering laws, efficient large-scale rectenna systems, and satellite constellation configuration optimization. This project includes the development of an integrated tool with implementation of IPOPT, Q-Law, and power-beaming models. The results highlight the viability of the concept, limits and paths to infusion, and comparison to state-of-the-art capabilities. The results indicate the viability of power beaming for what may be the only approach for achieving the desired transit times with high specific impulse.
Recent Efforts and Experiments in the Construction of Aviation Engines
NASA Technical Reports Server (NTRS)
SCHWAGER
1920-01-01
It became evident during World War I that ever-increasing demands were being placed on the mean power of aircraft engines as a result of the increased on board equipment and the demands of aerial combat. The need was for increased climbing efficiency and climbing speed. The response to these demands has been in terms of lightweight construction and the adaptation of the aircraft engine to the requirements of its use. Discussed here are specific efforts to increase flying efficiency, such as reduction of the number of revolutions of the propeller from 1400 to about 900 r.p.m. through the use of a reduction gear, increasing piston velocity, locating two crankshafts in one gear box, and using the two-cycle stroke. Also discussed are improvements in the transformation of fuel energy into engine power, the raising of compression ratios, the use of super-compression with carburetors constructed for high altitudes, the use of turbo-compressors, rotary engines, and the use of variable pitch propellers.
Leadership is associated with lower levels of stress
Sherman, Gary D.; Lee, Jooa J.; Cuddy, Amy J. C.; Renshon, Jonathan; Oveis, Christopher; Gross, James J.; Lerner, Jennifer S.
2012-01-01
As leaders ascend to more powerful positions in their groups, they face ever-increasing demands. As a result, there is a common perception that leaders have higher stress levels than nonleaders. However, if leaders also experience a heightened sense of control—a psychological factor known to have powerful stress-buffering effects—leadership should be associated with reduced stress levels. Using unique samples of real leaders, including military officers and government officials, we found that, compared with nonleaders, leaders had lower levels of the stress hormone cortisol and lower reports of anxiety (study 1). In study 2, leaders holding more powerful positions exhibited lower cortisol levels and less anxiety than leaders holding less powerful positions, a relationship explained significantly by their greater sense of control. Altogether, these findings reveal a clear relationship between leadership and stress, with leadership level being inversely related to stress. PMID:23012416
NASA Astrophysics Data System (ADS)
Siegel, Edward
2008-03-01
The classic statistics of leading digits -- the Newcomb [Am. J. Math. 4, 39, 1881] - Weyl [Goett. Nachr., 1912] - Benford [Proc. Am. Phil. Soc. 78, 4, 51, 1938] ("NeWBe") on-average log-law P(d) = log[1 + 1/d] = log[(d+1)/d] [see "Benford's Law"; "FUZZYICS": Siegel, AMS Nat. Mtg. 2002 and 2008; Raimi, Sci. Am. 221, 109, 1969; Hill, Proc. AMS 123, 3, 887, 1996] -- is independent of logarithm base and of units, i.e. it is scale-invariant. The algebraic inverse d = 1/[e^w - 1] links digits (pre-1881) to Bose-Einstein statistics (1924): energy levels with ground state (d = 0), first excited state (d = 1), and so on; there are no fractions, only integer digit differences, i.e. quanta.
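As a quick numerical check on the log-law quoted above, the snippet below evaluates the leading-digit probabilities P(d) = log10(1 + 1/d) and confirms that they sum to one.

```python
# First-digit probabilities from the Newcomb-Benford log law:
# P(d) = log10(1 + 1/d) = log10((d+1)/d), for d = 1..9.
import math

probs = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
for d, p in probs.items():
    print(f"P({d}) = {p:.4f}")
print("sum =", round(sum(probs.values()), 12))  # the sum telescopes to log10(10) = 1
```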
Excessive computer game playing: evidence for addiction and aggression?
Grüsser, S M; Thalemann, R; Griffiths, M D
2007-04-01
Computer games have become an ever-increasing part of many adolescents' day-to-day lives. Coupled with this phenomenon, reports of excessive gaming (computer game playing), labelled as "computer/video game addiction", have been discussed in the popular press as well as in recent scientific research. The aim of the present study was to investigate the addictive potential of gaming as well as the relationship between excessive gaming and aggressive attitudes and behavior. A sample comprising 7069 gamers answered two questionnaires online. The data revealed that 11.9% of participants (840 gamers) fulfilled diagnostic criteria of addiction concerning their gaming behavior, while there was only weak evidence for the assumption that aggressive behavior is interrelated with excessive gaming in general. The results of this study support the view that playing games without monetary reward can also meet criteria of addiction. Hence, the addictive potential of gaming should be taken into consideration in prevention and intervention.
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
The path towards sustainable energy
NASA Astrophysics Data System (ADS)
Chu, Steven; Cui, Yi; Liu, Nian
2017-01-01
Civilization continues to be transformed by our ability to harness energy beyond human and animal power. A series of industrial and agricultural revolutions have allowed an increasing fraction of the world population to heat and light their homes, fertilize and irrigate their crops, connect to one another and travel around the world. All of this progress is fuelled by our ability to find, extract and use energy with ever increasing dexterity. Research in materials science is contributing to progress towards a sustainable future based on clean energy generation, transmission and distribution, the storage of electrical and chemical energy, energy efficiency, and better energy management systems.
The path towards sustainable energy.
Chu, Steven; Cui, Yi; Liu, Nian
2016-12-20
Civilization continues to be transformed by our ability to harness energy beyond human and animal power. A series of industrial and agricultural revolutions have allowed an increasing fraction of the world population to heat and light their homes, fertilize and irrigate their crops, connect to one another and travel around the world. All of this progress is fuelled by our ability to find, extract and use energy with ever increasing dexterity. Research in materials science is contributing to progress towards a sustainable future based on clean energy generation, transmission and distribution, the storage of electrical and chemical energy, energy efficiency, and better energy management systems.
Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...
Cunningham, Colin; Russell, Adrian
2012-08-28
Since the dawn of civilization, the human race has pushed technology to the limit to study the heavens in ever-increasing detail. As astronomical instruments have evolved from those built by Tycho Brahe in the sixteenth century, through Galileo and Newton in the seventeenth, to the present day, astronomers have made ever more precise measurements. To do this, they have pushed the art and science of precision engineering to extremes. Some of the critical steps are described in the evolution of precision engineering from the first telescopes to the modern generation telescopes and ultra-sensitive instruments that need a combination of precision manufacturing, metrology and accurate positioning systems. In the future, precision-engineered technologies such as those emerging from the photonics industries may enable future progress in enhancing the capabilities of instruments, while potentially reducing the size and cost. In the modern era, there has been a revolution in astronomy leading to ever-increasing light-gathering capability. Today, the European Southern Observatory (ESO) is at the forefront of this revolution, building observatories on the ground that are set to transform our view of the universe. At an elevation of 5000 m in the Atacama Desert of northern Chile, the Atacama Large Millimetre/submillimetre Array (ALMA) is nearing completion. The ALMA is the most powerful radio observatory ever and is being built by a global partnership from Europe, North America and East Asia. In the optical/infrared part of the spectrum, the latest project for ESO is even more ambitious: the European Extremely Large Telescope, a giant 40 m class telescope that will also be located in Chile and which will give the most detailed view of the universe so far.
The "Magic" of Wireless Access in the Library
ERIC Educational Resources Information Center
Balas, Janet L.
2006-01-01
It seems that the demand for public access computers grows exponentially every time a library network is expanded, making it impossible to ever have enough computers available for patrons. One solution that many libraries are implementing to ease the demand for public computer use is to offer wireless technology that allows patrons to bring in…
Isolated Operation at Hachinohe Micro-Grid Project
NASA Astrophysics Data System (ADS)
Takano, Tomihiro; Kojima, Yasuhiro; Temma, Koji; Simomura, Masaru
To address global warming, renewable energy sources such as wind, solar and biomass generation are increasing dramatically. Cogeneration systems are also growing steadily to reduce consumers' energy costs in factories, buildings and homes where large thermal loads are expected. As these dispersed generators grow in number, their negative impact on the quality of commercial power systems becomes non-negligible, because their unstable output causes network voltage and frequency fluctuations. Micro-grid technology has come to the fore to solve this problem, and many demonstration field tests are now under way all over the world. This paper presents the control paradigm and its application to the Hachinohe micro-grid project, focusing in particular on power quality during isolated operation, on which strict conditions are imposed.
Tracking Cloud Motion and Deformation for Short-Term Photovoltaic Power Forecasting
NASA Astrophysics Data System (ADS)
Good, Garrett; Siefert, Malte; Fritz, Rafael; Saint-Drenan, Yves-Marie; Dobschinski, Jan
2016-04-01
With the increasing role of photovoltaic power production, the need to accurately forecast and anticipate weather-driven elements like cloud cover has become ever more important. Of particular concern is forecasting on the short-term (up to several hours), for which the most recent full weather simulation may no longer provide the most accurate information in light of real-time satellite measurements. We discuss the application of the image correlation velocimetry technique described by Tokumaru & Dimotakis (1995) (for calculating flow fields from images) to measure deformations of various orders based on recent satellite imagery, with the goal of not only more accurately forecasting the advection of cloud structures, but their continued deformation as well.
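A toy version of the correlation idea -- estimating how a cloud field has translated between two consecutive satellite images -- is sketched below using a phase-correlation shift estimate. The full image correlation velocimetry technique of Tokumaru & Dimotakis also recovers higher-order deformation, which this sketch deliberately omits; the synthetic frames are placeholders.

```python
# Bulk cloud-motion estimate between two consecutive images via phase
# correlation. A stand-in for image correlation velocimetry, which also
# measures deformation, not just translation.
import numpy as np

def estimate_shift(img0, img1):
    """Estimate the (dy, dx) translation of img1 relative to img0."""
    F0, F1 = np.fft.fft2(img0), np.fft.fft2(img1)
    cross = F1 * np.conj(F0)
    cross /= np.abs(cross) + 1e-12            # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=(3, -5), axis=(0, 1))   # "cloud field" moved by (3, -5)
print(estimate_shift(frame0, frame1))                  # recovers (3, -5)
```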
Development and Application of a ZigBee-Based Building Energy Monitoring and Control System
Peng, Changhai
2014-01-01
Increasing energy consumption, particularly with the ever-increasing growth and development of urban systems, has become a major concern in most countries. In this paper, the authors propose a cost-effective ZigBee-based building energy monitoring and control system (ZBEMCS), which is composed of a gateway, a base station, and sensors. Specifically, a new hardware platform for power sensor nodes is developed to perform both local/remote power parameter measurement and power on/off switching for electric appliances. The experimental results show that the ZBEMCS can easily monitor energy usage with a high level of accuracy. Two typical applications of ZBEMCS such as subentry metering and household metering of building energy are presented. The former includes lighting socket electricity, HVAC electricity, power electricity and special electricity. The latter includes household metering according to the campus's main function zone and each college or department. Therefore, this system can be used for energy consumption monitoring, long-term energy conservation planning, and the development of automated energy conservation for building applications. PMID:25254249
Development and application of a ZigBee-based building energy monitoring and control system.
Peng, Changhai; Qian, Kun
2014-01-01
Increasing energy consumption, particularly with the ever-increasing growth and development of urban systems, has become a major concern in most countries. In this paper, the authors propose a cost-effective ZigBee-based building energy monitoring and control system (ZBEMCS), which is composed of a gateway, a base station, and sensors. Specifically, a new hardware platform for power sensor nodes is developed to perform both local/remote power parameter measurement and power on/off switching for electric appliances. The experimental results show that the ZBEMCS can easily monitor energy usage with a high level of accuracy. Two typical applications of ZBEMCS such as subentry metering and household metering of building energy are presented. The former includes lighting socket electricity, HVAC electricity, power electricity and special electricity. The latter includes household metering according to the campus's main function zone and each college or department. Therefore, this system can be used for energy consumption monitoring, long-term energy conservation planning, and the development of automated energy conservation for building applications.
Genome assembly reborn: recent computational challenges
2009-01-01
Research into genome assembly algorithms has experienced a resurgence due to new challenges created by the development of next generation sequencing technologies. Several genome assemblers have been published in recent years specifically targeted at the new sequence data; however, the ever-changing technological landscape leads to the need for continued research. In addition, the low cost of next generation sequencing data has led to an increased use of sequencing in new settings. For example, the new field of metagenomics relies on large-scale sequencing of entire microbial communities instead of isolate genomes, leading to new computational challenges. In this article, we outline the major algorithmic approaches for genome assembly and describe recent developments in this domain. PMID:19482960
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by ever smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
Proteinortho: detection of (co-)orthologs in large-scale analysis.
Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J
2011-04-28
Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
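The core of the reciprocal best alignment heuristic mentioned above can be sketched in a few lines: a pair of proteins is called orthologous when each is the other's best-scoring hit. Proteinortho extends this with adaptive thresholds and multi-species graph clustering; the score tables below are hypothetical alignment scores, not real BLAST output.

```python
# Bare-bones reciprocal best hit (RBH) orthology call between two proteomes.
# Proteinortho's extended heuristic adds thresholds and graph clustering;
# the score dictionaries here are hypothetical alignment scores.
def best_hits(scores):
    """scores[(query, target)] -> score; return query -> best-scoring target."""
    best = {}
    for (q, t), s in scores.items():
        if q not in best or s > best[q][1]:
            best[q] = (t, s)
    return {q: t for q, (t, s) in best.items()}

def reciprocal_best_hits(a_vs_b, b_vs_a):
    ab, ba = best_hits(a_vs_b), best_hits(b_vs_a)
    return [(a, b) for a, b in ab.items() if ba.get(b) == a]

a_vs_b = {("A1", "B1"): 310.0, ("A1", "B2"): 55.0, ("A2", "B2"): 120.0}
b_vs_a = {("B1", "A1"): 305.0, ("B2", "A1"): 60.0, ("B2", "A2"): 118.0}
print(reciprocal_best_hits(a_vs_b, b_vs_a))  # [('A1', 'B1'), ('A2', 'B2')]
```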
Biological insight, high-throughput datasets and the nature of neuro-degenerative disorders.
Valente, André X C N; Oliveira, Paulo J; Khaiboullina, Svetlana F; Palotás, András; Rizvanov, Albert A
2013-09-01
Life sciences are experiencing a historical shift towards a quantitative, data-rich regime. This transition has been associated with the advent of bio-informatics: mathematicians, physicists, computer scientists and statisticians are now commonplace in the field, working on the analysis of ever larger data-sets. An open question regarding what should drive scientific progress in this new era remains: will biological insight become increasingly irrelevant in a world of hypothesis-free, unbiased data analysis? This piece offers a different perspective, arguing that biological thought is more relevant than ever in a data-rich setting. Some of the novel high-throughput information being acquired in the field of neuro-degenerative disorders is highlighted here. As but one example of how theory and experiment can interact in this new reality, our efforts in developing an idiopathic neuro-degenerative disease hematopoietic stem-cell ageing theory are described.
A 500 A device characterizer utilizing a pulsed-linear amplifier
NASA Astrophysics Data System (ADS)
Lacouture, Shelby; Bayne, Stephen
2016-02-01
With the advent of modern power semiconductor switching elements, the envelope defining "high power" is ever expanding. Characterization of these semiconductor power devices generally falls into two categories: switching, or transient, characteristics and static, or DC, characteristics. With the increasing native voltage and current levels that modern power devices are capable of handling, characterization equipment meant to extract quasi-static IV curves has not kept pace, often leaving researchers with no other option than to construct ad hoc curve tracers from disparate pieces of equipment. In this paper, a dedicated 10 V, 500 A curve tracer was designed and constructed for use with state-of-the-art high-power semiconductor switching and control elements. The characterizer is a physically small, pulsed-power system at the heart of which is a relatively high-power linear amplifier operating in a switched manner in order to deliver well-defined square voltage pulses. These actively shaped pulses are used to obtain a device's quasi-static DC characteristics accurately without damaging the device under test. Voltage and current waveforms from each pulse are recorded simultaneously by two separate high-speed analog-to-digital converters and averaged over a specified interval to obtain points in the reconstructed IV graph.
EVER-EST: a virtual research environment for Earth Sciences
NASA Astrophysics Data System (ADS)
Marelli, Fulvio; Albani, Mirko; Glaves, Helen
2016-04-01
There is an increasing requirement for researchers to work collaboratively using common resources whilst being geographically dispersed. By creating a virtual research environment (VRE) using a service oriented architecture (SOA) tailored to the needs of Earth Science (ES) communities, the EVEREST project will provide a range of both generic and domain specific data management services to support a dynamic approach to collaborative research. EVER-EST will provide the means to overcome existing barriers to sharing of Earth Science data and information allowing research teams to discover, access, share and process heterogeneous data, algorithms, results and experiences within and across their communities, including those domains beyond Earth Science. Researchers will be able to seamlessly manage both the data involved in their computationally intensive disciplines and the scientific methods applied in their observations and modelling, which lead to the specific results that need to be attributable, validated and shared both within the community and more widely e.g. in the form of scholarly communications. Central to the EVEREST approach is the concept of the Research Object (RO), which provides a semantically rich mechanism to aggregate related resources about a scientific investigation so that they can be shared together using a single unique identifier. Although several e-laboratories are incorporating the research object concept in their infrastructure, the EVER-EST VRE will be the first infrastructure to leverage the concept of Research Objects and their application in observational rather than experimental disciplines. Development of the EVEREST VRE will leverage the results of several previous projects which have produced state-of-the-art technologies for scientific data management and curation, as well as those which have developed models, techniques and tools for the preservation of scientific methods and their implementation in computational forms such as scientific workflows. The EVER-EST data processing infrastructure will be based on a Cloud Computing approach, in which new applications can be integrated using "virtual machines" that have their own specifications (disk size, processor speed, operating system etc.) and run on shared private (physical deployment over local hardware) or commercial Cloud infrastructures. The EVER-EST e-infrastructure will be validated by four virtual research communities (VRC) covering different multidisciplinary Earth Science domains including: ocean monitoring, natural hazards, land monitoring and risk management (volcanoes and seismicity). Each VRC will use the virtual research environment according to its own specific requirements for data, software, best practice and community engagement. This user-centric approach will allow an assessment to be made of the capability of the proposed solution to satisfy the heterogeneous needs of a variety of Earth Science communities for more effective collaboration, and higher efficiency and creativity in research. EVER-EST is funded by the European Commission's H2020 programme for three years starting in October 2015. The project is led by the European Space Agency (ESA) and involves some of the major European Earth Science data providers/users including NERC, DLR, INGV, CNR and SatCEN.
Last chance for carbon capture and storage
NASA Astrophysics Data System (ADS)
Scott, Vivian; Gilfillan, Stuart; Markusson, Nils; Chalmers, Hannah; Haszeldine, R. Stuart
2013-02-01
Anthropogenic energy-related CO2 emissions are higher than ever. With new fossil-fuel power plants, growing energy-intensive industries and new sources of fossil fuels in development, further emissions increase seems inevitable. The rapid application of carbon capture and storage is a much heralded means to tackle emissions from both existing and future sources. However, despite extensive and successful research and development, progress in deploying carbon capture and storage has stalled. No fossil-fuel power plants, the greatest source of CO2 emissions, are using carbon capture and storage, and publicly supported demonstration programmes are struggling to deliver actual projects. Yet, carbon capture and storage remains a core component of national and global emissions-reduction scenarios. Governments have to either increase commitment to carbon capture and storage through much more active market support and emissions regulation, or accept its failure and recognize that continued expansion of power generation from burning fossil fuels is a severe threat to attaining objectives in mitigating climate change.
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
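The "simple single-scale saliency map" starting point referred to above can be approximated by a basic center-surround contrast operator. The sketch below is a rough illustration of that idea on a synthetic grayscale frame, not the model evaluated in the paper.

```python
# Rough single-scale, center-surround saliency map on a grayscale frame:
# local contrast as the difference of a fine and a coarse Gaussian blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency(frame, sigma_center=1.0, sigma_surround=8.0):
    """|fine-scale blur - coarse-scale blur|, normalised to [0, 1]."""
    center = gaussian_filter(frame.astype(float), sigma_center)
    surround = gaussian_filter(frame.astype(float), sigma_surround)
    s = np.abs(center - surround)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
frame[24:40, 24:40] += 2.0                          # an unusually bright patch
sal = saliency(frame)
print(np.unravel_index(np.argmax(sal), sal.shape))  # most salient pixel, near the patch
```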
MD-11 PCA - Research flight team egress
NASA Technical Reports Server (NTRS)
1995-01-01
This McDonnell Douglas MD-11 has parked on the flightline at NASA's Dryden Flight Research Center, Edwards, California, following its completion of the first and second landings ever performed by a transport aircraft under engine power only (on Aug. 29, 1995). The milestone flight, with NASA research pilot and former astronaut Gordon Fullerton at the controls, was part of a NASA project to develop a computer-assisted engine control system that enables a pilot to land a plane safely when its normal control surfaces are disabled. Coming down the steps from the aircraft are Gordon Fullerton (in front), followed by Bill Burcham, Propulsion Controlled Aircraft (PCA) project engineer at Dryden; NASA Dryden controls engineer John Burken; John Feather of McDonnell Douglas; and Drew Pappas, McDonnell Douglas' project manager for PCA.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
NASA Technical Reports Server (NTRS)
Jefferies, K. S.; Tew, R. C.
1974-01-01
A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.
Computer methods for sampling from the gamma distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, M.E.; Tadikamalla, P.R.
1978-01-01
Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
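As a concrete illustration of the kind of generator surveyed above, the sketch below implements an acceptance-rejection gamma sampler (the Marsaglia-Tsang squeeze method, a later relative of the algorithms compared in the paper). The shape-boosting trick for shape < 1 and the fixed unit scale are standard conventions, not details taken from the survey.

```python
# Gamma(shape) sampling by acceptance-rejection (Marsaglia-Tsang method).
# Scale is fixed to 1 for brevity.
import math
import random

def gamma_variate(shape, rng=random):
    if shape < 1.0:                                   # boosting trick for shape < 1
        return gamma_variate(shape + 1.0, rng) * rng.random() ** (1.0 / shape)
    d = shape - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)                       # standard normal proposal
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue
        u = rng.random()
        if math.log(u) < 0.5 * x * x + d - d * v + d * math.log(v):
            return d * v                              # accepted sample

samples = [gamma_variate(2.5) for _ in range(100_000)]
print(sum(samples) / len(samples))   # sample mean, close to the shape parameter 2.5
```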
ERIC Educational Resources Information Center
Fairlie, Robert W.; Robinson, Jonathan
2013-01-01
Computers are an important part of modern education, yet large segments of the population--especially low-income and minority children--lack access to a computer at home. Does this impede educational achievement? We test this hypothesis by conducting the largest-ever field experiment involving the random provision of free computers for home use to…
Sex differences on a computerized mental rotation task disappear with computer familiarization.
Roberts, J E; Bell, M A
2000-12-01
The area of cognitive research that has produced the most consistent sex differences is spatial ability. Particularly, men consistently perform better on mental rotation tasks than do women. This study examined the effects of familiarization with a computer on performance of a computerized two-dimensional mental rotation task. Two groups of college students (N=44) performed the rotation task, with one group performing a color-matching task that allowed them to be familiarized with the computer prior to the rotation task. Among the participants who only performed the rotation task, the 11 men performed better than the 11 women. Among the participants who performed the computer familiarization task before the rotation task, however, there were no sex differences on the mental rotation task between the 10 men and 12 women. These data indicate that sex differences on this two-dimensional task may reflect familiarization with the computer, not the mental rotation component of the task. Further research with larger samples and increased range of task difficulty is encouraged.
Visualization of Pulsar Search Data
NASA Astrophysics Data System (ADS)
Foster, R. S.; Wolszczan, A.
1993-05-01
The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches, covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process the data and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
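At its core, this kind of search looks for excess power at discrete frequencies in the Fourier transform of a long time series. The sketch below is a bare-bones illustration of that step on synthetic data; real pipelines add dedispersion, harmonic summing, acceleration searches, and interference rejection.

```python
# Toy periodicity search: FFT a noisy time series and report the strongest
# spectral peak. A minimal stand-in for a real pulsar search pipeline.
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 10.0, 1.0 / fs)     # 10 s of data -> 0.1 Hz frequency resolution
rng = np.random.default_rng(42)
# Weak 6.4 Hz sinusoid buried in noise, standing in for a pulsar signal.
signal = 0.2 * np.sin(2 * np.pi * 6.4 * t) + rng.normal(0.0, 1.0, t.size)

power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
peak = freqs[np.argmax(power[1:]) + 1]           # ignore the DC bin
print(f"strongest periodicity: {peak:.2f} Hz (period {1e3 / peak:.1f} ms)")
```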
RS-25 Engines Powered to Highest Level Ever During Stennis Test
2018-02-21
Operators powered NASA’s Space Launch System (SLS) engine to 113 percent thrust level, the highest RS-25 power level yet achieved, for 50 seconds of a 260-second test on February 21 at Stennis Space Center. This was the third full-duration test conducted on the A-1 Test Stand at Stennis this year.
78 FR 8506 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-06
...: 20130130-5140 Comments Due: 5 p.m. ET 2/20/13 Docket Numbers: ER13-828-000 Applicants: EverPower Wind... Beach, L.L.C., AES Laurel Mountain, LLC, AES Redondo Beach, L.L.C., Condon Wind Power, LLC, Lake Benton... Docket Numbers: ER13-827-000 Applicants: Carolina Power & Light Company Description: Service Agreement No...
Measurements by a Vector Network Analyzer at 325 to 508 GHz
NASA Technical Reports Server (NTRS)
Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony
2008-01-01
Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering- parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radiofrequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges somewhat greater than about 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.
Constraints and Opportunities in GCM Model Development
NASA Technical Reports Server (NTRS)
Schmidt, Gavin; Clune, Thomas
2010-01-01
Over the past 30 years climate models have evolved from relatively simple representations of a few atmospheric processes to complex multi-disciplinary system models which incorporate physics from bottom of the ocean to the mesopause and are used for seasonal to multi-million year timescales. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Constraints of working within an ever evolving research code mean that most software changes must be incremental so as not to disrupt scientific throughput. Unfortunately, programming methodologies have generally not kept pace with these challenges, and existing implementations now present a heavy and growing burden on further model development as well as limiting flexibility and reliability. Opportunely, advances in software engineering from other disciplines (e.g. the commercial software industry) as well as new generations of powerful development tools can be incorporated by the model developers to incrementally and systematically improve underlying implementations and reverse the long term trend of increasing development overhead. However, these methodologies cannot be applied blindly, but rather must be carefully tailored to the unique characteristics of scientific software development. We will discuss the need for close integration of software engineers and climate scientists to find the optimal processes for climate modeling.
Design analysis of vertical wind turbine with airfoil variation
NASA Astrophysics Data System (ADS)
Maulana, Muhammad Ilham; Qaedy, T. Masykur Al; Nawawi, Muhammad
2016-03-01
With the electrical energy crisis in the city of Banda Aceh ever increasing, it is important to investigate alternative methods of generating power beyond fossil fuels. In fact, one of the biggest sources of energy in Aceh is wind energy. It can be harnessed not only by big corporations but also by individuals using Vertical Axis Wind Turbines (VAWT). This paper presents a three-dimensional CFD analysis of the influence of airfoil design on the performance of a Darrieus-type vertical-axis wind turbine (VAWT). The main objective of this paper is to develop an airfoil design for a NACA 63-series vertical-axis wind turbine at an average wind velocity of 2.5 m/s. To utilize both lift and drag forces, several airfoil designs are analyzed using a commercial computational fluid dynamics solver, Fluent. Simulations are performed for these airfoils at angles of attack ranging over -12°, -8°, -4°, 0°, 4°, 8°, and 12°. The analysis showed that, among the NACA 63-series airfoils, the most significant enhancement in lift coefficient occurred for NACA 63-412.
MIT-NASA Workshop: Transformational Technologies
NASA Technical Reports Server (NTRS)
Mankins, J. C. (Editor); Christensen, C. B.; Gresham, E. C.; Simmons, A.; Mullins, C. A.
2005-01-01
As a space faring nation, we are at a critical juncture in the evolution of space exploration. NASA has announced its Vision for Space Exploration, a vision of returning humans to the Moon, sending robots and eventually humans to Mars, and exploring the outer solar system via automated spacecraft. However, mission concepts have become increasingly complex, with the potential to yield a wealth of scientific knowledge. Meanwhile, there are significant resource challenges to be met. Launch costs remain a barrier to routine space flight; the ever-changing fiscal and political environments can wreak havoc on mission planning; and technologies are constantly improving, and systems that were state of the art when a program began can quickly become outmoded before a mission is even launched. This Conference Publication describes the workshop and featured presentations by world-class experts presenting leading-edge technologies and applications in the areas of power and propulsion; communications; automation, robotics, computing, and intelligent systems; and transformational techniques for space activities. Workshops such as this one provide an excellent medium for capturing the broadest possible array of insights and expertise, learning from researchers in universities, national laboratories, NASA field Centers, and industry to help better our future in space.
Aortic Valve Calcification and Risk of Stroke: The Rotterdam Study.
Bos, Daniel; Bozorgpourniazi, Atefeh; Mutlu, Unal; Kavousi, Maryam; Vernooij, Meike W; Moelker, Adriaan; Franco, Oscar H; Koudstaal, Peter J; Ikram, M Arfan; van der Lugt, Aad
2016-11-01
It remains uncertain whether aortic valve calcification (AVC) is a risk factor for stroke. From the population-based Rotterdam Study, 2471 participants (mean age: 69.6 years; 51.8% women) underwent computed tomography to quantify AVC. We assessed prevalent stroke and continuously monitored the remaining participants for the incidence of stroke. Logistic and Cox regression models were used to investigate associations of AVC with prevalent stroke and risk of incident stroke. AVC was present in 33.1% of people. At baseline, 97 participants had ever suffered a stroke. During 18 665 person-years of follow-up (mean: 7.9 years), 135 people experienced a first-ever stroke. The presence of AVC was not associated with prevalent stroke (fully adjusted odds ratio: 0.97; 95% confidence interval, 0.61-1.53) or with an increased risk of stroke (fully adjusted hazard ratio: 0.99; 95% confidence interval, 0.69-1.44). Although AVC is a common finding in middle-aged and elderly community-dwelling people, our results suggest that AVC is not associated with an increased risk of stroke. © 2016 American Heart Association, Inc.
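For readers less familiar with how an adjusted odds ratio such as the one quoted above is produced, the sketch below fits a logistic regression of a binary outcome on an exposure plus a covariate and exponentiates the coefficient. The data are entirely synthetic and the statsmodels-based workflow is only an illustrative stand-in for the study's actual analysis.

```python
# How an adjusted odds ratio is obtained: logistic regression of a binary
# outcome on an exposure (AVC presence) plus a covariate (age).
# Entirely synthetic data -- not the Rotterdam Study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2471
age = rng.normal(69.6, 8.0, n)                       # covariate
avc = rng.binomial(1, 0.33, n)                       # exposure present in ~33%
true_logit = -8.0 + 0.07 * age + 0.0 * avc           # true OR for the exposure set to 1.0
outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([avc, age]))     # columns: const, avc, age
fit = sm.Logit(outcome, X).fit(disp=False)
odds_ratio = np.exp(fit.params[1])                   # exponentiated AVC coefficient
low, high = np.exp(fit.conf_int()[1])
print(f"adjusted OR for the exposure: {odds_ratio:.2f} (95% CI {low:.2f}-{high:.2f})")
```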
Current algorithmic solutions for peptide-based proteomics data generation and identification.
Hoopmann, Michael R; Moritz, Robert L
2013-02-01
Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.
Energy Efficiency Maximization of Practical Wireless Communication Systems
NASA Astrophysics Data System (ADS)
Eraslan, Eren
Energy consumption of modern wireless communication systems is rapidly growing due to the ever-increasing data demand and the advanced solutions employed in order to address this demand, such as multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. These MIMO systems are power hungry; however, they are capable of changing the transmission parameters, such as the number of spatial streams, number of transmitter/receiver antennas, modulation, code rate, and transmit power. They can thus choose the best mode out of possibly thousands of modes in order to optimize an objective function. This problem is referred to as the link adaptation problem. In this work, we focus on link adaptation for energy efficiency maximization, which is defined as choosing the optimal transmission mode to maximize the number of successfully transmitted bits per unit energy consumed by the link. We model the energy consumption and throughput performance of a MIMO-OFDM link and develop a practical link adaptation protocol, which senses the channel conditions and changes its transmission mode in real time. It turns out that the brute-force search, which is usually assumed in previous works, is prohibitively complex, especially when there are large numbers of transmit power levels to choose from. We analyze the relationship between energy efficiency and transmit power, and prove that the energy efficiency of a link is a single-peaked quasiconcave function of transmit power. This leads us to develop a low-complexity algorithm that finds a near-optimal transmit power and takes this dimension out of the search space. We further prune the search space by analyzing the singular value decomposition of the channel and excluding the modes that use a higher number of spatial streams than the channel can support. These algorithms and our novel formulations provide simpler computations and limit the search space to a much smaller set, hence reducing the computational complexity by orders of magnitude without sacrificing performance. The result of this work is a highly practical link adaptation protocol for maximizing the energy efficiency of modern wireless communication systems. Simulation results show orders of magnitude gain in the energy efficiency of the link. We also implemented the link adaptation protocol on real-time MIMO-OFDM radios and we report on the experimental results. To the best of our knowledge, this is the first reported testbed that is capable of performing energy-efficient fast link adaptation using PHY layer information.
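The single-peakedness result described above is exactly what makes a simple bracketing search over transmit power valid. The sketch below runs a ternary search on a stylized energy-efficiency model (Shannon rate divided by transmit-plus-circuit power); the channel gain, bandwidth, amplifier efficiency and circuit power are illustrative constants, not parameters from the dissertation.

```python
# Ternary search for the transmit power that maximizes a stylized energy
# efficiency EE(p) = rate(p) / (p / eta + P_circuit). Quasiconcavity of
# EE in p is what makes this simple bracketing search valid.
import math

def energy_efficiency(p, gain=1e-7, noise=1e-10, bw=20e6, eta=0.35, p_circ=1.0):
    rate = bw * math.log2(1.0 + gain * p / noise)    # bits per second
    return rate / (p / eta + p_circ)                 # bits per joule

def argmax_ternary(f, lo, hi, tol=1e-6):
    """Locate the peak of a single-peaked function on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

p_star = argmax_ternary(energy_efficiency, 1e-3, 10.0)
print(f"EE-optimal transmit power: {p_star:.3f} W, "
      f"EE = {energy_efficiency(p_star) / 1e6:.1f} Mbit/J")
```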
Changes in Electronic Cigarette Use from 2013 to 2015 and Reasons for Use among Finnish Adolescents.
Kinnunen, Jaana M; Ollila, Hanna; Lindfors, Pirjo L; Rimpelä, Arja H
2016-11-09
Electronic cigarettes are quite a new potential source of nicotine addiction among youth. More research is needed, particularly on e-liquid use and socioeconomic factors as potential determinants. We studied changes from 2013 to 2015 in adolescent e-cigarette awareness and ever-use, types of e-liquids, and determinants in Finland. In 2015, we studied weekly use and reasons for ever-use. Data were from two national surveys of 12-18-year-old Finns (2013, n = 3535, response rate 38%; 2015, n = 6698, 41%). Descriptive statistics and logistic regression analysis were used. Awareness and ever-use of e-cigarettes increased significantly from 2013 to 2015 in all age and gender groups. Ever-use increased from 17.4% to 25%, with half having tried nicotine e-liquids. In 2015, weekly use was rare (1.5%). Daily cigarette smoking was the strongest determinant (OR 51.75; 95% CI 38.18-70.14) for e-cigarette ever-use, as for e-cigarette weekly use, but smoking experimentation and ever-use of snus (Swedish type moist snuff) and waterpipes alongside parental smoking and poor academic achievement also increased the odds for ever-use. The most common reason behind e-cigarette ever-use was the desire to try something new. To conclude, adolescent e-cigarette ever-use is increasing, and also among never-smokers. Tobacco-related factors are stronger determinants for e-cigarette use than socioeconomic factors.
Changes in Electronic Cigarette Use from 2013 to 2015 and Reasons for Use among Finnish Adolescents
Kinnunen, Jaana M.; Ollila, Hanna; Lindfors, Pirjo L.; Rimpelä, Arja H.
2016-01-01
Electronic cigarettes are quite a new potential source of nicotine addiction among youth. More research is needed, particularly on e-liquid use and socioeconomic factors as potential determinants. We studied changes from 2013 to 2015 in adolescent e-cigarette awareness and ever-use, types of e-liquids, and determinants in Finland. In 2015, we studied weekly use and reasons for ever-use. Data were from two national surveys of 12–18-year-old Finns (2013, n = 3535, response rate 38%; 2015, n = 6698, 41%). Descriptive statistics and logistic regression analysis were used. Awareness and ever-use of e-cigarettes increased significantly from 2013 to 2015 in all age and gender groups. Ever-use increased from 17.4% to 25%, with half having tried nicotine e-liquids. In 2015, weekly use was rare (1.5%). Daily cigarette smoking was the strongest determinant (OR 51.75; 95% CI 38.18–70.14) for e-cigarette ever-use, as for e-cigarette weekly use, but smoking experimentation and ever-use of snus (Swedish type moist snuff) and waterpipes alongside parental smoking and poor academic achievement also increased the odds for ever-use. The most common reason behind e-cigarette ever-use was the desire to try something new. To conclude, adolescent e-cigarette ever-use is increasing, and also among never-smokers. Tobacco-related factors are stronger determinants for e-cigarette use than socioeconomic factors. PMID:27834885
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
Multiscale Modeling of Damage Processes in Aluminum Alloys: Grain-Scale Mechanisms
NASA Technical Reports Server (NTRS)
Hochhalter, J. D.; Veilleux, M. G.; Bozek, J. E.; Glaessgen, E. H.; Ingraffea, A. R.
2008-01-01
This paper has two goals related to the development of a physically-grounded methodology for modeling the initial stages of fatigue crack growth in an aluminum alloy. The aluminum alloy, AA 7075-T651, is susceptible to fatigue cracking that nucleates from cracked second phase iron-bearing particles. Thus, the first goal of the paper is to validate an existing framework for the prediction of the conditions under which the particles crack. The observed statistics of particle cracking (defined as incubation for this alloy) must be accurately predicted to simulate the stochastic nature of microstructurally small fatigue crack (MSFC) formation. Also, only by simulating incubation of damage in a statistically accurate manner can subsequent stages of crack growth be accurately predicted. To maintain fidelity and computational efficiency, a filtering procedure was developed to eliminate particles that were unlikely to crack. The particle filter considers the distributions of particle sizes and shapes, grain texture, and the configuration of the surrounding grains. This filter helps substantially reduce the number of particles that need to be included in the microstructural models and forms the basis of the future work on the subsequent stages of MSFC, crack nucleation and microstructurally small crack propagation. A physics-based approach to simulating fracture should ultimately begin at nanometer length scale, in which atomistic simulation is used to predict the fundamental damage mechanisms of MSFC. These mechanisms include dislocation formation and interaction, interstitial void formation, and atomic diffusion. However, atomistic simulations quickly become computationally intractable as the system size increases, especially when directly linking to the already large microstructural models. Therefore, the second goal of this paper is to propose a method that will incorporate atomistic simulation and small-scale experimental characterization into the existing multiscale framework. At the microscale, the nanoscale mechanics are represented within cohesive zones where appropriate, i.e. where the mechanics observed at the nanoscale can be represented as occurring on a plane such as at grain boundaries or slip planes at a crack front. Important advancements that are yet to be made include: 1. an increased fidelity in cohesive zone modeling; 2. a means to understand how atomistic simulation scales with time; 3. a new experimental methodology for generating empirical models for CZMs and emerging materials; and 4. a validation of simulations of the damage processes at the nano-micro scale. With ever-increasing computer power, the long-term ability to employ atomistic simulation for the prognosis of structural components will not be limited by computation power, but by our lack of knowledge in incorporating atomistic models into simulations of MSFC into a multiscale framework.
An, Gary; Bartels, John; Vodovotz, Yoram
2011-03-01
The clinical translation of promising basic biomedical findings, whether derived from reductionist studies in academic laboratories or as the product of extensive high-throughput and -content screens in the biotechnology and pharmaceutical industries, has reached a period of stagnation in which ever higher research and development costs are yielding ever fewer new drugs. Systems biology and computational modeling have been touted as potential avenues by which to break through this logjam. However, few mechanistic computational approaches are utilized in a manner that is fully cognizant of the inherent clinical realities in which the drugs developed through this ostensibly rational process will be ultimately used. In this article, we present a Translational Systems Biology approach to inflammation. This approach is based on the use of mechanistic computational modeling centered on inherent clinical applicability, namely that a unified suite of models can be applied to generate in silico clinical trials, individualized computational models as tools for personalized medicine, and rational drug and device design based on disease mechanism.
BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs
Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen
2014-01-01
Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm3 brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/). PMID:24672471
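As a rough illustration of the second-level non-parametric testing described above (and not BROCCOLI's OpenCL code), the sketch below builds a family-wise-error-corrected null distribution by sign-flipping subject contrast images and recording the maximum statistic over voxels. The data shapes, synthetic data, and thresholds are all assumptions.

```python
"""Minimal NumPy sketch of a second-level sign-flipping permutation test:
the max statistic over voxels is recomputed under random sign flips to
build an FWE-corrected null distribution."""
import numpy as np

def permutation_test(contrasts, n_perm=10000, alpha=0.05, seed=0):
    # contrasts: (n_subjects, n_voxels) first-level contrast images
    rng = np.random.default_rng(seed)
    n_sub, n_vox = contrasts.shape
    observed = contrasts.mean(axis=0) / (contrasts.std(axis=0, ddof=1) / np.sqrt(n_sub))
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))   # random sign flip per subject
        flipped = contrasts * signs
        t = flipped.mean(axis=0) / (flipped.std(axis=0, ddof=1) / np.sqrt(n_sub))
        max_null[i] = t.max()                              # max statistic -> FWE control
    threshold = np.quantile(max_null, 1.0 - alpha)
    return observed, threshold, observed > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(size=(20, 5000))
    data[:, :50] += 1.0                                    # a small "active" region
    t_obs, thr, sig = permutation_test(data, n_perm=2000)
    print(f"FWE threshold {thr:.2f}; {sig.sum()} significant voxels")
```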
Data Structures for Extreme Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahan, Simon
As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose “latency-tolerant” programming framework. One important application developed from the ideas underlying this framework is graph database technology supporting social network pattern matching, used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.
The perils of power in interpretive research.
Cohn, Ellen S; Lyons, Kathleen Doyle
2003-01-01
Occupational therapy is based on core values of altruism, equality, and honoring the dignity of others. Embedded in these values is the ever-present negotiation of power. To honor the concern for the welfare of others, researchers are challenged to think about issues of power throughout the research process. This paper identifies dilemmas and raises questions researchers might ask themselves as they struggle to share power in the interpretive research process.
Natural Hazards in Your Community
ERIC Educational Resources Information Center
Martinez, Cindy
2004-01-01
The Earth is a powerful, active, and ever-changing planet. Earthquakes and volcanoes reshape the Earth's crust with sudden bursts of movement or with eruptions that last decades. Powerful storms develop in the swirling atmosphere, creating cumulonimbus thunderclouds, lightning storms, and even tornadoes or hurricanes. Geological features and moist…
Ethics in published brain-computer interface research
NASA Astrophysics Data System (ADS)
Specker Sullivan, L.; Illes, J.
2018-02-01
Objective. Sophisticated signal processing has opened the doors to more research with human subjects than ever before. The increase in the use of human subjects in research comes with a need for increased human subjects protections. Approach. We quantified the presence or absence of ethics language in published reports of brain-computer interface (BCI) studies that involved human subjects and qualitatively characterized ethics statements. Main results. Reports of BCI studies with human subjects that are published in neural engineering and engineering journals are anchored in the rationale of technological improvement. Ethics language is markedly absent, omitted from 31% of studies published in neural engineering journals and 59% of studies in biomedical engineering journals. Significance. As the integration of technological tools with the capacities of the mind deepens, explicit attention to ethical issues will ensure that broad human benefit is embraced and not eclipsed by technological exclusiveness.
The path towards sustainable energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Steven; Cui, Yi; Liu, Nian
2016-12-20
Civilization continues to be transformed by our ability to harness energy beyond human and animal power. A series of industrial and agricultural revolutions have allowed an increasing fraction of the world population to heat and light their homes, fertilize and irrigate their crops, connect to one another and travel around the world. All of this progress is fuelled by our ability to find, extract and use energy with ever increasing dexterity. Lastly, research in materials science is contributing to progress towards a sustainable future based on clean energy generation, transmission and distribution, the storage of electrical and chemical energy, energy efficiency, and better energy management systems.
AMS,Chang-Diaz works with computers in the middeck
2016-08-24
STS091-378-028 (2-12 June 1998) --- Astronaut Franklin R. Chang-Diaz, payload commander, inputs data on a laptop computer associated with the Alpha Magnetic Spectrometer (AMS) hardware located in the aft cargo bay. Reference JSC photo number STS091-367-033, which shows the hardware as seen from Russia's Mir space station, which was docked with Discovery at the time. AMS is the first large magnet experiment ever placed in Earth orbit. The scientific goal of this high-energy physics experiment is to increase our understanding of the composition and origin of the universe. It is designed to search for and measure charged particles, including antimatter, outside Earth's atmosphere. The charge of such particles can be identified by their trajectories in a magnetic field.
Blackford, Sarah
2018-04-01
With an ever more competitive global labour market, coupled with an ever-increasing population of PhD-qualified graduates, the ability to communicate effectively and build strategic connections with others can be advantageous in the job-search process. Whether in pursuit of a tenure-track or non-academic position, many postdoctoral researchers and PhD students will benefit from networking as early as possible to enhance their career prospects. Sometimes viewed cynically as 'using people' or dismissed as 'the old boy network,' the ability to make meaningful connections and build relationships can be more valuable than other job-related skills in order to gain entry to, and progress within, many professions. This mini-review highlights the positive influence of networking and how bioscience PhD students and postdoctoral researchers can harness the power of communities to achieve career success. It is argued that those who make connections and promote personal patronage through networking can gain an advantage over their contemporaries. A summary of key theories and research studies that underpin the practice of networking provides credence to these assertions, which are further substantiated with examples pertinent to the academic community. Although primarily focussed on the biosciences, much of the content is applicable to other scientists at a similar career stage.
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
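A small sketch of the decomposition idea named above (embedding buses from an electrical-distance matrix and clustering them), using scikit-learn's MDS and K-means rather than the paper's own implementation. The distance matrix here is synthetic, and the zone count and embedding dimension are arbitrary assumptions.

```python
"""Sketch: embed buses into Euclidean coordinates from a bus-to-bus
electrical-distance matrix via multidimensional scaling, then group
them with K-means.  The distance matrix is synthetic for illustration."""
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def decompose_grid(elec_dist, n_zones=4, n_dims=3, seed=0):
    # elec_dist: (n_bus, n_bus) symmetric matrix of electrical distances
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=seed)
    coords = mds.fit_transform(elec_dist)           # Euclidean coordinates per bus
    labels = KMeans(n_clusters=n_zones, n_init=10, random_state=seed).fit_predict(coords)
    return labels                                   # zone index for each bus

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(30, 2))                  # stand-in bus "locations"
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    zones = decompose_grid(d, n_zones=3)
    print("buses per zone:", np.bincount(zones))
```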
An Infrastructure to Enable Lightweight Context-Awareness for Mobile Users
Curiel, Pablo; Lago, Ana B.
2013-01-01
Mobile phones enable us to carry out a wider range of tasks every day, and as a result they have become more ubiquitous than ever. However, they are still more limited in terms of processing power and interaction capabilities than traditional computers, and the often distracting and time-constricted scenarios in which we use them do not help in alleviating these limitations. Context-awareness is a valuable technique to address these issues, as it enables applications to adapt their behaviour to each situation. In this paper we present a context management infrastructure for mobile environments, aimed at controlling the context information life-cycle in this kind of scenario, with the main goal of enabling applications and services to adapt their behaviour to better meet end-user needs. This infrastructure relies on semantic technologies and open standards to improve interoperability, and is based on a central element, the context manager. This element acts as a central context repository and takes on most of the computational burden derived from dealing with this kind of information, thus relieving the more resource-scarce devices in the system of these tasks. PMID:23899932
Data centers as dispatchable loads to harness stranded power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kibaek; Yang, Fan; Zavala, Victor M.
2016-07-20
Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today’s applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, hardware-based solutions, and in particular FPGAs, might appear as a candidate technology, since, although power use is higher than in lower-power devices, execution time is reduced, so overall energy could be reduced. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings can be achieved for certain higher-end applications. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, proving that better energy efficiency than processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
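A back-of-envelope sketch of the energy argument in the abstract: a higher-power FPGA can still consume less energy per task than a low-power microcontroller if it finishes the task much faster. All power and timing figures below are assumed for illustration, not measured values from the paper.

```python
"""Illustrative energy comparison: energy = power x time, so the faster
but hungrier device can win.  All numbers are hypothetical."""

def energy_mj(power_mw, time_ms):
    return power_mw * time_ms / 1000.0      # mJ = mW * s

# Hypothetical encryption of one data block
mcu_energy  = energy_mj(power_mw=30.0,  time_ms=120.0)    # slow software crypto
fpga_energy = energy_mj(power_mw=400.0, time_ms=2.0)      # fast hardware crypto

print(f"MCU : {mcu_energy:.1f} mJ per block")
print(f"FPGA: {fpga_energy:.1f} mJ per block")
print("FPGA wins on energy" if fpga_energy < mcu_energy else "MCU wins on energy")
```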
ERIC Educational Resources Information Center
Emery, Jill
2010-01-01
In August 2010, "Wired" magazine declared, "The Web is Dead. Long Live the Internet." Citing the rise of IPad & Smartphone sales and the rapid explosion of application-based software to run various programs on multiple computing devices--but especially mobile computing devices--people spend more hours than ever connected to or "on" the Internet…
The Ever-Present Demand for Public Computing Resources. CDS Spotlight
ERIC Educational Resources Information Center
Pirani, Judith A.
2014-01-01
This Core Data Service (CDS) Spotlight focuses on public computing resources, including lab/cluster workstations in buildings, virtual lab/cluster workstations, kiosks, laptop and tablet checkout programs, and workstation access in unscheduled classrooms. The findings are derived from 758 CDS 2012 participating institutions. A dataset of 529…
Characterization of lubrication oil emissions from aircraft engines.
Yu, Zhenhong; Liscinsky, David S; Winstead, Edward L; True, Bruce S; Timko, Michael T; Bhargava, Anuj; Herndon, Scott C; Miake-Lye, Richard C; Anderson, Bruce E
2010-12-15
In this first ever study, particulate matter (PM) emitted from the lubrication system overboard breather vent for two different models of aircraft engines has been systematically characterized. Lubrication oil was confirmed as the predominant component of the emitted particulate matter based upon the characteristic mass spectrum of the pure oil. Total particulate mass and size distributions of the emitted oil are also investigated by several high-sensitivity aerosol characterization instruments. The emission index (EI) of lubrication oil at engine idle is in the range of 2-12 mg kg(-1) and increases with engine power. The chemical composition of the oil droplets is essentially independent of engine thrust, suggesting that engine oil does not undergo thermally driven chemical transformations during the ∼4 h test window. Volumetric mean diameter is around 250-350 nm for all engine power conditions with a slight power dependence.
Planning chemical syntheses with deep neural networks and symbolic AI
NASA Astrophysics Data System (ADS)
Segler, Marwin H. S.; Preuss, Mike; Waller, Mark P.
2018-03-01
To plan the syntheses of small organic molecules, chemists use retrosynthesis, a problem-solving technique in which target molecules are recursively transformed into increasingly simpler precursors. Computer-aided retrosynthesis would be a valuable tool but at present it is slow and provides results of unsatisfactory quality. Here we use Monte Carlo tree search and symbolic artificial intelligence (AI) to discover retrosynthetic routes. We combined Monte Carlo tree search with an expansion policy network that guides the search, and a filter network to pre-select the most promising retrosynthetic steps. These deep neural networks were trained on essentially all reactions ever published in organic chemistry. Our system solves for almost twice as many molecules, thirty times faster than the traditional computer-aided search method, which is based on extracted rules and hand-designed heuristics. In a double-blind AB test, chemists on average considered our computer-generated routes to be equivalent to reported literature routes.
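A toy sketch of policy-guided Monte Carlo tree search, the general technique named above (not the authors' chemistry system). "Molecules" are stand-in integers that must be fully decomposed through allowed "precursor" steps, and the uniform expansion policy is a placeholder for the trained networks described in the abstract; all names and constants are hypothetical.

```python
"""Toy policy-guided MCTS: select with a PUCT-style rule, expand with a
placeholder policy, simulate by random playout, and back up rewards."""
import math, random

PRECURSORS = [7, 5, 3, 2]                     # hypothetical allowed retro-steps

def legal_moves(state):
    return [p for p in PRECURSORS if p <= state]

def policy(state):
    """Placeholder expansion policy: uniform priors over legal moves."""
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves} if moves else {}

class Node:
    def __init__(self, state, prior=1.0):
        self.state, self.prior = state, prior
        self.children, self.visits, self.value = {}, 0, 0.0

def select(node, c_puct=1.4):
    """PUCT-style selection: average value plus a prior-weighted exploration bonus."""
    return max(node.children.values(),
               key=lambda ch: ch.value / (1 + ch.visits)
               + c_puct * ch.prior * math.sqrt(node.visits + 1) / (1 + ch.visits))

def rollout(state, depth=20):
    """Random playout: reward 1.0 if the 'molecule' is fully decomposed."""
    for _ in range(depth):
        if state == 0:
            return 1.0
        moves = legal_moves(state)
        if not moves:
            return 0.0                        # dead end
        state -= random.choice(moves)
    return 1.0 if state == 0 else 0.0

def mcts(root_state, n_sim=2000):
    root = Node(root_state)
    for _ in range(n_sim):
        node, path = root, [root]
        while node.children:                              # 1. select
            node = select(node)
            path.append(node)
        for move, prior in policy(node.state).items():    # 2. expand with priors
            node.children[move] = Node(node.state - move, prior)
        if node.children:                                 # 3. simulate from one new child
            child = random.choice(list(node.children.values()))
            path.append(child)
            reward = rollout(child.state)
        else:
            reward = 1.0 if node.state == 0 else 0.0      # terminal leaf
        for n in path:                                    # 4. back up
            n.visits += 1
            n.value += reward
    return max(root.children, key=lambda m: root.children[m].visits)

if __name__ == "__main__":
    random.seed(0)
    print("first retro-step chosen for target 23:", mcts(23))
```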
Performance optimization of a hybrid micro-grid based on double-loop MPPT and SVC-MERS
NASA Astrophysics Data System (ADS)
Wei, Yewen; Hou, Xilun; Zhang, Xiang; Xiong, Shengnan; Peng, Fei
2018-02-01
With ever-increasing concerns about environmental pollution and energy shortage, the development of renewable resources has attracted a lot of attention. This paper first reviews wind and photovoltaic (PV) generation techniques and approaches to micro-grid voltage control. Then, a novel islanded micro-grid, which consists of wind and PV generation plus a hybrid energy-storage device, is built for application to remote and isolated areas. For the PV generation branch, a double-loop maximum power point tracking (MPPT) technique is developed to trace the sunlight and regulate the tilt angle of the PV panels. For the wind-power generation branch, a squirrel cage induction generator (SCIG) is used for its simple structure, robustness, and low cost. In order to stabilize the output voltage of the SCIGs, a new Static Var Compensator named the magnetic energy recovery switch (SVC-MERS) is applied. Finally, experimental results confirm that the proposed methods improve the efficiency of PV power generation and the voltage stability of the micro-grid, respectively.
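To make the MPPT idea concrete, the sketch below implements perturb-and-observe, a common tracking loop, rather than the paper's own double-loop method (which also steers panel tilt). The PV power curve is a toy quadratic stand-in, and all voltages and step sizes are assumptions.

```python
"""Illustrative perturb-and-observe MPPT loop over a toy PV power curve."""

def pv_power(voltage):
    # toy panel characteristic with a maximum power point near 30 V
    return max(0.0, -0.5 * (voltage - 30.0) ** 2 + 450.0)

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iters):
        v += direction * step                # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                       # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

if __name__ == "__main__":
    v_mpp, p_mpp = perturb_and_observe()
    print(f"settled near {v_mpp:.1f} V, {p_mpp:.0f} W")
```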
Special Issue: Materials for Electrochemical Capacitors and Batteries.
Wang, Jian-Gan; Wei, Bingqing
2017-04-22
Electrochemical capacitors and rechargeable batteries have received worldwide attention due to their excellent energy storage capability for a variety of applications. The rapid development of these technologies is propelled by the advanced electrode materials and new energy storage systems. It is believed that research efforts can improve the device performance to meet the ever-increasing requirements of high energy density, high power density and long cycle life. This Special Issue aims to provide readers with a glimpse of different kinds of electrode materials for electrochemical capacitors and batteries.
Network Implementation Trade-Offs in Existing Homes
NASA Astrophysics Data System (ADS)
Keiser, Gerd
2013-03-01
The ever-increasing demand for networking of high-bandwidth services in existing homes has resulted in several options for implementing an in-home network. Among the options are power-line communication techniques, twisted-pair copper wires, wireless links, and plastic or glass optical fibers. Whereas it is easy to install high-bandwidth optical fibers during the construction of new living units, retrofitting of existing homes with networking capabilities requires some technology innovations. This article addresses some trade-offs that need to be made on what transmission media can be retrofitted most effectively in existing homes.
NASA Astrophysics Data System (ADS)
Daniel, Michael T.
Here in the early 21st century humanity is continuing to seek improved quality of life for citizens throughout the world. This global advancement is providing more people than ever with access to state-of-the-art services in areas such as transportation, entertainment, computing, communication, and so on. Providing these services to an ever-growing population while considering the constraints levied by continuing climate change will require new frontiers of clean energy to be developed. At the time of this writing, offshore wind has been proven as both a politically and economically agreeable source of clean, sustainable energy by northern European nations with many wind farms deployed in the North, Baltic, and Irish Seas. Modern offshore wind farms are equipped with an electrical system within the farm itself to aggregate the energy from all turbines in the farm before it is transmitted to shore. This collection grid is traditionally a 3-phase medium voltage alternating current (MVAC) system. Due to reactive power and other practical constraints, it is preferable to use a medium voltage direct current (MVDC) collection grid when siting farms >150 km from shore. To date, no offshore wind farm features an MVDC collection grid. However, MVDC collection grids are expected to be deployed with future offshore wind farms as they are sited further out to sea. In this work it is assumed that many future offshore wind farms may utilize an MVDC collection grid to aggregate electrical energy generated by individual wind turbines. As such, this work presents both per-phase and per-pole power electronic converter systems suitable for interfacing individual wind turbines to such an MVDC collection grid. Both interfaces are shown to provide high input power factor at the wind turbine while providing DC output current to the MVDC grid. Common mode voltage stress and circulating currents are investigated, and mitigation strategies are provided for both interfaces. A power sharing scheme for connecting multiple wind turbines in series to allow for a higher MVDC grid voltage is also proposed and analyzed. The overall results show that the proposed per-pole approach yields key advantages in areas of common mode voltage stress, circulating current, and DC link capacitance, making it the more appropriate choice of the two proposed interfaces for this application.
Silicon photonics for high-performance interconnection networks
NASA Astrophysics Data System (ADS)
Biberman, Aleksandr
2011-12-01
We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics enables new sets of opportunities that we can leverage for performance gains, as well as new sets of challenges that we must solve. Leveraging its inherent compatibility with standard fabrication techniques of the semiconductor industry, combined with its capability of dense integration with advanced microelectronics, silicon photonics also offers a clear path toward commercialization through low-cost mass-volume production. Combining empirical validations of feasibility, demonstrations of massive performance gains in large-scale systems, and the potential for commercial penetration of silicon photonics, the impact of this work will become evident in the many decades that follow.
Seeing the forest for the trees: Networked workstations as a parallel processing computer
NASA Technical Reports Server (NTRS)
Breen, J. O.; Meleedy, D. M.
1992-01-01
Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units and can therefore perform several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms for selecting nodes to the speed gains achieved in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
Protecting computer-based medical devices: defending against viruses and other threats.
2005-07-01
The increasing integration of computer hardware has exposed medical devices to greater risks than ever before. More and more devices rely on commercial off-the-shelf software and operating systems, which are vulnerable to the increasing proliferation of viruses and other malicious programs that target computers. Therefore, it is necessary for hospitals to take steps such as those outlined in this article to ensure that their computer-based devices are made safe and continue to remain safe in the future. Maintaining the security of medical devices requires planning, careful execution, and a commitment of resources. A team should be created to develop a process for surveying the security status of all computerized devices in the hospital and making sure that patches and other updates are applied as needed. These patches and updates should be approved by the medical system supplier before being implemented. The team should consider using virtual local area networks to isolate susceptible devices on the hospital's network. All security measures should be carefully documented, and the documentation should be kept up-to-date. Above all, care must be taken to ensure that medical device security involves a collaborative, supportive partnership between the hospital's information technology staff and biomedical engineering personnel.
Proteinortho: Detection of (Co-)orthologs in large-scale analysis
2011-01-01
Background Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
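A simplified sketch of the reciprocal best-hit idea at the core of the heuristic named above. Real pipelines such as Proteinortho score pairs with BLAST-like alignments and an extended graph-based heuristic; here a toy similarity function and two hypothetical proteomes stand in.

```python
"""Simplified reciprocal best-hit orthology sketch with a toy similarity."""

def best_hit(query, targets, score):
    return max(targets, key=lambda t: score(query, t))

def reciprocal_best_hits(proteome_a, proteome_b, score):
    pairs = []
    for a in proteome_a:
        b = best_hit(a, proteome_b, score)          # best hit of a in B
        if best_hit(b, proteome_a, score) == a:     # and a is the best hit of b in A
            pairs.append((a, b))
    return pairs

if __name__ == "__main__":
    # toy similarity: number of shared characters between protein "names"
    sim = lambda x, y: len(set(x) & set(y))
    A = ["kinaseA1", "polymerase", "helicase"]
    B = ["kinaseB1", "polymeraseX", "ligase"]
    print(reciprocal_best_hits(A, B, sim))
```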
Prediction and characterization of application power use in a high-performance computing environment
Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...
2017-02-27
Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
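As a sketch of the kind of application-level power prediction described above, the snippet below fits a regression model mapping a priori job characteristics to average node power. The feature names, synthetic relation, and model choice are all hypothetical and not taken from the paper.

```python
"""Sketch: predict mean node power (W) from a priori job features."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_jobs = 500
X = np.column_stack([
    rng.integers(1, 65, n_jobs),        # nodes requested (hypothetical feature)
    rng.uniform(0, 24, n_jobs),         # wall time requested, hours (hypothetical)
    rng.integers(0, 3, n_jobs),         # application class code (hypothetical)
])
# synthetic "true" relation between job features and mean power per node
y = 180 + 2.0 * X[:, 0] + 15 * X[:, 2] + rng.normal(0, 10, n_jobs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out jobs: {model.score(X_te, y_te):.2f}")
```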
Wenzel, H G; Bakken, I J; Johansson, A; Götestam, K G; Øren, Anita
2009-12-01
Computer games are the most advanced form of gaming. For most people, the playing is an uncomplicated leisure activity; however, for a minority the gaming becomes excessive and is associated with negative consequences. The aim of the present study was to investigate computer game-playing behaviour in the general adult Norwegian population, and to explore mental health problems and self-reported consequences of playing. The survey includes 3,405 adults 16 to 74 years old (Norway 2007, response rate 35.3%). Overall, 65.5% of the respondents reported having ever played computer games (16-29 years, 93.9%; 30-39 years, 85.0%; 40-59 years, 56.2%; 60-74 years, 25.7%). Among 2,170 players, 89.8% reported playing less than 1 hr. as a daily average over the last month, 5.0% played 1-2 hr. daily, 3.1% played 2-4 hr. daily, and 2.2% reported playing > 4 hr. daily. The strongest risk factor for playing > 4 hr. daily was being an online player, followed by male gender and single marital status. Reported negative consequences of computer game playing increased strongly with average daily playing time. Furthermore, prevalence of self-reported sleeping problems, depression, suicide ideations, anxiety, obsessions/compulsions, and alcohol/substance abuse increased with increasing playing time. This study showed that adult populations should also be included in research on computer game-playing behaviour and its consequences.
Chip-scale integrated optical interconnects: a key enabler for future high-performance computing
NASA Astrophysics Data System (ADS)
Haney, Michael; Nair, Rohit; Gu, Tian
2012-01-01
High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide a seamless interconnect fabric spanning the intra-
Hospital mainframe computer documentation of pharmacist interventions.
Schumock, G T; Guenette, A J; Clark, T; McBride, J M
1993-07-01
The hospital mainframe computer pharmacist intervention documentation system described has successfully facilitated the recording, communication, analysis, and reporting of interventions at our hospital. It has proven to be time efficient, accessible, and user-friendly from the standpoint of both the pharmacist and the administrator. The advantages of this system greatly outweigh those of manual documentation and justify the initial time investment in its design and development. In the future, it is hoped that the system can have even broader impact. Documented interventions and recommendations can be made accessible to medical and nursing staff, further increasing interdepartmental communication. As pharmacists embrace the pharmaceutical care mandate, documenting interventions in patient care will continue to grow in importance. Complete documentation is essential if pharmacists are to assume responsibility for patient outcomes. With time at an ever-increasing premium, and with economic and human resources dwindling, an efficient and effective means of recording and tracking pharmacist interventions will become imperative for survival in the fiscally challenged health care arena. Documentation of pharmacist interventions using a hospital mainframe computer at UIH has proven both efficient and effective.
Norton, Tomás; Sun, Da-Wen; Grant, Jim; Fallon, Richard; Dodd, Vincent
2007-09-01
The application of computational fluid dynamics (CFD) in the agricultural industry is becoming ever more important. Over the years, the versatility, accuracy and user-friendliness offered by CFD has led to its increased take-up by the agricultural engineering community. Now CFD is regularly employed to solve environmental problems of greenhouses and animal production facilities. However, due to a combination of increased computer efficacy and advanced numerical techniques, the realism of these simulations has only been enhanced in recent years. This study provides a state-of-the-art review of CFD, its current applications in the design of ventilation systems for agricultural production systems, and the outstanding challenging issues that confront CFD modellers. The current status of greenhouse CFD modelling was found to be at a higher standard than that of animal housing, owing to the incorporation of user-defined routines that simulate crop biological responses as a function of local environmental conditions. Nevertheless, the most recent animal housing simulations have addressed this issue and in turn have become more physically realistic.
Use of computers in dysmorphology.
Diliberti, J H
1988-01-01
As a consequence of the increasing power and decreasing cost of digital computers, dysmorphologists have begun to explore a wide variety of computerised applications in clinical genetics. Of considerable interest are developments in the areas of syndrome databases, expert systems, literature searches, image processing, and pattern recognition. Each of these areas is reviewed from the perspective of the underlying computer principles, existing applications, and the potential for future developments. Particular emphasis is placed on the analysis of the tasks performed by the dysmorphologist and the design of appropriate tools to facilitate these tasks. In this context the computer and associated software are considered paradigmatically as tools for the dysmorphologist and should be designed accordingly. Continuing improvements in the ability of computers to manipulate vast amounts of data rapidly make the development of increasingly powerful tools for the dysmorphologist highly probable. PMID:3050092
Interpreting signals from astrophysical transient experiments.
O'Brien, Paul T; Smartt, Stephen J
2013-06-13
Time-domain astronomy has come of age with astronomers now able to monitor the sky at high cadence, both across the electromagnetic spectrum and using neutrinos and gravitational waves. The advent of new observing facilities permits new science, but the ever-increasing throughput of facilities demands efficient communication of coincident detections and better subsequent coordination among the scientific community so as to turn detections into scientific discoveries. To discuss the revolution occurring in our ability to monitor the Universe and the challenges it brings, on 25-26 April 2012, a group of scientists from observational and theoretical teams studying transients met with representatives of the major international transient observing facilities at the Kavli Royal Society International Centre, UK. This immediately followed the Royal Society Discussion Meeting 'New windows on transients across the Universe' held in London. Here, we present a summary of the Kavli meeting at which the participants discussed the science goals common to the transient astronomy community and analysed how to better meet the challenges ahead as ever more powerful observational facilities come on stream.
[The future of radiology: What can we expect within the next 10 years?].
Nensa, F; Forsting, M; Wetter, A
2016-03-01
More than any other medical discipline, radiology is marked by technical innovation and continuous development, as well as by the optimization of the underlying physical principles. In this respect, several trends that will crucially change and develop radiology over the next decade can be observed. Ever faster computed tomography with ever-decreasing radiation exposure will give this "workhorse" of radiology an even greater role and further displace conventional X-ray techniques. In addition, hybrid imaging, which combines nuclear medicine and radiological techniques (keywords: PET/CT, PET/MRI), will become much more established and will, in particular, further improve oncological imaging, allowing increasingly individualized imaging in which specific tracers and functional magnetic resonance imaging techniques are chosen for a particular tumour. Future radiology will also be strongly shaped by innovations in the software and Internet industries, which will enable new image viewing and processing methods and open up new possibilities in the organization of radiological work.
The professional responsibility model of physician leadership.
Chervenak, Frank A; McCullough, Laurence B; Brent, Robert L
2013-02-01
The challenges physician leaders confront today call to mind Odysseus' challenge to steer his fragile ship successfully between Scylla and Charybdis. The modern Scylla takes the form of ever-increasing pressures to provide more resources for professional liability, compliance, patient satisfaction, central administration, and a host of other demands. The modern Charybdis takes the form of ever-increasing pressures to procure resources when fewer are available and competition is continuously increasing the need for resources, including managed care, hospital administration, payers, employers, patients who are uninsured or underinsured, research funding, and philanthropy. This publication provides physician leaders with guidance for identifying and managing common leadership challenges on the basis of the professional responsibility model of physician leadership. This model is based on Plato's concept of leadership as a life of service and the professional medical ethics of Drs John Gregory and Thomas Percival. Four professional virtues should guide physician leaders: self-effacement, self-sacrifice, compassion, and integrity. These professional virtues direct physician leaders to treat colleagues as ends in themselves, to provide justice-based resource management, to use power constrained by medical professionalism, and to prevent and respond effectively to organizational dysfunction. The professional responsibility model guides physician leaders by proving an explicit "tool kit" to complement managerial skills. Copyright © 2013 Mosby, Inc. All rights reserved.
AnnotCompute: annotation-based exploration and meta-analysis of genomics experiments
Zheng, Jie; Stoyanovich, Julia; Manduchi, Elisabetta; Liu, Junmin; Stoeckert, Christian J.
2011-01-01
The ever-increasing scale of biological data sets, particularly those arising in the context of high-throughput technologies, requires the development of rich data exploration tools. In this article, we present AnnotCompute, an information discovery platform for repositories of functional genomics experiments such as ArrayExpress. Our system leverages semantic annotations of functional genomics experiments with controlled vocabulary and ontology terms, such as those from the MGED Ontology, to compute conceptual dissimilarities between pairs of experiments. These dissimilarities are then used to support two types of exploratory analysis—clustering and query-by-example. We show that our proposed dissimilarity measures correspond to a user's intuition about conceptual dissimilarity, and can be used to support effective query-by-example. We also evaluate the quality of clustering based on these measures. While AnnotCompute can support a richer data exploration experience, its effectiveness is limited in some cases, due to the quality of available annotations. Nonetheless, tools such as AnnotCompute may provide an incentive for richer annotations of experiments. Code is available for download at http://www.cbil.upenn.edu/downloads/AnnotCompute. Database URL: http://www.cbil.upenn.edu/annotCompute/ PMID:22190598
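A sketch of annotation-based dissimilarity and query-by-example as described above. The real system weighs ontology structure and controlled-vocabulary semantics; here a plain Jaccard distance over flat term sets stands in, and the experiment names and terms are hypothetical.

```python
"""Sketch: Jaccard dissimilarity between experiments' annotation term sets,
used for a simple query-by-example ranking."""

def jaccard_dissimilarity(terms_a, terms_b):
    a, b = set(terms_a), set(terms_b)
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

experiments = {
    "expt1": {"homo sapiens", "liver", "RNA-seq", "cancer"},
    "expt2": {"homo sapiens", "liver", "microarray", "cancer"},
    "expt3": {"mus musculus", "brain", "RNA-seq"},
}

# query-by-example: rank other experiments by conceptual closeness to expt1
query = experiments["expt1"]
ranked = sorted((jaccard_dissimilarity(query, t), name)
                for name, t in experiments.items() if name != "expt1")
for d, name in ranked:
    print(f"{name}: dissimilarity {d:.2f}")
```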
ERIC Educational Resources Information Center
Ravitch, Sharon M.
2014-01-01
Within the ever-developing, intersecting, and overlapping contexts of globalization, top-down policy, mandates, and standardization of public and higher education, many conceptualize and position practitioner research as a powerful stance and a tool of social, communal, and educational transformation, a set of methodological processes that…
Sex and Power in the Office: An Overview of Gender and Executive Power Perceptions in Organizations.
ERIC Educational Resources Information Center
Winsor, Jerry L.
An examination of recent literature concerning differing male and female socializations reveals a number of implications and suggestions for changing some negative executive attitudes regarding female executive skills. While more women are in executive positions than ever before, women are still at a disadvantage because the productive…
Carmichael, Clare; Carmichael, Patrick
2014-01-01
This paper highlights aspects related to current research and thinking about ethical issues in relation to Brain Computer Interface (BCI) and Brain-Neuronal Computer Interfaces (BNCI) research through the experience of one particular project, BrainAble, which is exploring and developing the potential of these technologies to enable people with complex disabilities to control computers. It describes how ethical practice has been developed both within the multidisciplinary research team and with participants. The paper presents findings in which participants shared their views of the project prototypes, of the potential of BCI/BNCI systems as an assistive technology, and of their other possible applications. This draws attention to the importance of ethical practice in projects where high expectations of technologies, and representations of "ideal types" of disabled users may reinforce stereotypes or drown out participant "voices". Ethical frameworks for research and development in emergent areas such as BCI/BNCI systems should be based on broad notions of a "duty of care" while being sufficiently flexible that researchers can adapt project procedures according to participant needs. They need to be frequently revisited, not only in the light of experience, but also to ensure they reflect new research findings and ever more complex and powerful technologies.
Artificial intelligence. Fears of an AI pioneer.
Russell, Stuart; Bohannon, John
2015-07-17
From the enraged robots in the 1920 play R.U.R. to the homicidal computer H.A.L. in 2001: A Space Odyssey, science fiction writers have embraced the dark side of artificial intelligence (AI) ever since the concept entered our collective imagination. Sluggish progress in AI research, especially during the “AI winter” of the 1970s and 1980s, made such worries seem far-fetched. But recent breakthroughs in machine learning and vast improvements in computational power have brought a flood of research funding— and fresh concerns about where AI may lead us. One researcher now speaking up is Stuart Russell, a computer scientist at the University of California, Berkeley, who with Peter Norvig, director of research at Google, wrote the premier AI textbook, Artificial Intelligence: A Modern Approach, now in its third edition. Last year, Russell joined the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom as an AI expert focusing on “risks that could lead to human extinction.” Among his chief concerns, which he aired at an April meeting in Geneva, Switzerland, run by the United Nations, is the danger of putting military drones and weaponry under the full control of AI systems. This interview has been edited for clarity and brevity.
Computer hardware for radiologists: Part 2
Indrajit, IK; Alam, A
2010-01-01
Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like the motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary if the radiologist's workflow is to realize its full potential; this knowledge will also prepare the radiologist for the coming innovations of an ever-expanding digital future. PMID:21423895
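The capacity relation described above lends itself to a quick calculation. The sketch below is illustrative only; the geometry values are hypothetical and not taken from the article, which simply notes the sides x tracks x sectors x bytes-per-sector dependence.

```python
# Illustrative sketch of the capacity relation described above; the geometry
# values are hypothetical, not taken from the article.
def drive_capacity_bytes(sides, tracks_per_side, sectors_per_track, bytes_per_sector=512):
    """Capacity = disk sides x tracks per side x sectors per track x bytes per sector."""
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

# Example: a hypothetical drive with 16 sides, 65,535 tracks per side,
# 255 sectors per track, and 512-byte sectors.
capacity = drive_capacity_bytes(16, 65535, 255)
print(f"{capacity / 1e9:.1f} GB")  # ~136.9 GB
```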
Dysphagia and Obstructive Sleep Apnea in Acute, First-Ever, Ischemic Stroke.
Losurdo, Anna; Brunetti, Valerio; Broccolini, Aldobrando; Caliandro, Pietro; Frisullo, Giovanni; Morosetti, Roberta; Pilato, Fabio; Profice, Paolo; Giannantoni, Nadia Mariagrazia; Sacchetti, Maria Luisa; Testani, Elisa; Vollono, Catello; Della Marca, Giacomo
2018-03-01
Obstructive sleep apnea (OSA) and dysphagia are common in acute stroke and are both associated with increased risk of complications and worse prognosis. The aims of the present study were (1) to evaluate the prevalence of OSA and dysphagia in patients with acute, first-ever, ischemic stroke; (2) to investigate their clinical correlates; and (3) to verify if these conditions are associated in acute ischemic stroke. We enrolled a cohort of 140 consecutive patients with acute-onset (<48 hours), first-ever ischemic stroke. Computed tomography (CT) and magnetic resonance imaging scans confirmed the diagnosis. Neurological deficit was measured using the National Institutes of Health Stroke Scale (NIHSS) by examiners trained and certified in the use of this scale. Patients underwent a clinical evaluation of dysphagia (Gugging Swallowing Screen) and a cardiorespiratory sleep study to evaluate the presence of OSA. There were 72 patients (51.4%) with obstructive sleep apnea (OSA+) and 81 patients (57.8%) with dysphagia (Dys+). OSA+ patients were significantly older (P = .046) and had greater body mass index (BMI) (P = .002) and neck circumference (P = .001), and a higher prevalence of diabetes (P = .013) and hypertension (P < .001). Dys+ patients had higher NIHSS scores (P < .001), lower Alberta Stroke Programme Early CT Scores (P < .001), and greater BMI (P = .030). The association of OSA and dysphagia was greater than that expected based on the prevalence of each condition in acute stroke (P < .001). OSA and dysphagia are associated in first-ever, acute ischemic stroke. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.
High power disk lasers: advances and applications
NASA Astrophysics Data System (ADS)
Havrilla, David; Holzer, Marco
2011-02-01
Though the genesis of the disk laser concept dates to the early 1990s, the disk laser continues to demonstrate the flexibility and certain future of a breakthrough technology. On-going increases in power per disk, and improvements in beam quality and efficiency, continue to validate the genius of the disk laser concept. As of today, the disk principle has not reached any fundamental limits regarding output power per disk or beam quality, and offers numerous advantages over other high power resonator concepts, especially over monolithic architectures. With well over 1000 high power disk laser installations, the disk laser has proven to be a robust and reliable industrial tool. With advancements in running cost, investment cost, and footprint, manufacturers continue to implement disk laser technology with more vigor than ever. This paper will explain important details of the TruDisk laser series and process-relevant features of the system, like pump diode arrangement, resonator design, and integrated beam guidance. In addition, advances in applications in the thick sheet area and very cost-efficient, high-productivity applications like remote welding, remote cutting, and cutting of thin sheets will be discussed.
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.; Geng, Steven M.; Pearson, J. Boise; Godfroy, Thomas J.
2010-01-01
As a step towards development of Stirling power conversion for potential use in Fission Surface Power (FSP) systems, a pair of commercially available 1 kW class free-piston Stirling convertors was modified to operate with a NaK liquid metal pumped loop for thermal energy input. This was the first-ever attempt at powering a free-piston Stirling engine with a pumped liquid metal heat source and is a major FSP project milestone towards demonstrating technical feasibility. The tests included performance mapping the convertors over various hot and cold-end temperatures, piston amplitudes and NaK flow rates; and transient test conditions to simulate various start-up and fault scenarios. Performance maps of the convertors generated using the pumped NaK loop for thermal input show increases in power output over those measured during baseline testing using electric heating. Transient testing showed that the Stirling convertors can be successfully started in a variety of different scenarios and that the convertors can recover from a variety of fault scenarios.
Designing for Ab Initio Blended Learning Environments: Identifying Systemic Contradictions
ERIC Educational Resources Information Center
Ó Doinn, Oisín
2017-01-01
In recent years, Computer Assisted Language Learning (CALL) has become more accessible than ever before. This is largely due to the proliferation of mobile computing devices and the growth of open online language-learning resources. Additionally, since the beginning of the millennium there has been massive growth in the number of students studying…
Tools and Trends in Self-Paced Language Instruction
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2007-01-01
Ever since the PLATO system of the 1960's, CALL (computer assisted language learning) has had a major focus on providing self-paced, auto-correcting exercises for language learners to practice their skills and improve their knowledge of discrete areas of language learning. The computer has been recognized from the beginning as a patient and…
ERIC Educational Resources Information Center
Gattiker, Urs E.; And Others
It is expected that by 1990 the majority of clerical and managerial workers in North America will use computers in their daily work. An integrative model was developed which views quality of work life as an ever changing dimension influenced by computerization and by perception of career success and non-work factors. To test this model, a study…
An, Gary; Bartels, John; Vodovotz, Yoram
2011-01-01
The clinical translation of promising basic biomedical findings, whether derived from reductionist studies in academic laboratories or as the product of extensive high-throughput and –content screens in the biotechnology and pharmaceutical industries, has reached a period of stagnation in which ever higher research and development costs are yielding ever fewer new drugs. Systems biology and computational modeling have been touted as potential avenues by which to break through this logjam. However, few mechanistic computational approaches are utilized in a manner that is fully cognizant of the inherent clinical realities in which the drugs developed through this ostensibly rational process will be ultimately used. In this article, we present a Translational Systems Biology approach to inflammation. This approach is based on the use of mechanistic computational modeling centered on inherent clinical applicability, namely that a unified suite of models can be applied to generate in silico clinical trials, individualized computational models as tools for personalized medicine, and rational drug and device design based on disease mechanism. PMID:21552346
Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.
Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid
2017-03-01
The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
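As a rough illustration of the kind of decision the abstract describes, the sketch below chooses between on-device classification and offloading raw data to the phone, based on an accuracy target and simple energy estimates. It is a hypothetical sketch, not the authors' algorithm; the function and parameter names are assumptions.

```python
# Hypothetical sketch of a dynamic offloading decision, not the authors' algorithm.
# Energy models and parameter names are illustrative assumptions.

def should_offload(local_energy_mj, tx_energy_mj, local_accuracy, remote_accuracy,
                   required_accuracy):
    """Return True if raw data should be sent to the phone for classification."""
    local_ok = local_accuracy >= required_accuracy
    remote_ok = remote_accuracy >= required_accuracy
    if local_ok and remote_ok:
        # Both meet the accuracy target: pick the cheaper option in energy.
        return tx_energy_mj < local_energy_mj
    # Otherwise prefer whichever option meets the target (if any).
    return remote_ok

# Example: on-device classification costs 12 mJ per window at 88% accuracy,
# transmitting raw samples costs 9 mJ and the phone-side model reaches 93%.
print(should_offload(12.0, 9.0, 0.88, 0.93, required_accuracy=0.90))  # True
```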
DOE Office of Scientific and Technical Information (OSTI.GOV)
N /A
The U.S. Department of Energy (DOE) proposes to consent to a proposal by the Puerto Rico Electric Power Authority (PREPA) to allow public access to the Boiling Nuclear Superheat (BONUS) reactor building located near Rincon, Puerto Rico for use as a museum. PREPA, the owner of the BONUS facility, has determined that the historical significance of this facility, as one of only two reactors of this design ever constructed in the world, warrants preservation in a museum, and that this museum would provide economic benefits to the local community through increased tourism. Therefore, PREPA is proposing development of the BONUS facility as a museum.
Schmidhuber, Jürgen
2013-01-01
Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771
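A minimal, schematic sketch of the greedy loop described above follows. `propose_candidates` and `solves` are hypothetical placeholders; the published framework orders candidate (task, solver-modification) pairs by conditional time and space complexity, which is not reproduced here.

```python
# A minimal, schematic sketch of the greedy PowerPlay loop described above.
# propose_candidates() and solves() are hypothetical placeholders; the real
# framework orders candidates by conditional time/space complexity.

def powerplay(solver, propose_candidates, solves, repertoire=None, rounds=10):
    repertoire = list(repertoire or [])
    for _ in range(rounds):
        # Candidates are (new_task, modified_solver) pairs, assumed to be
        # generated in order of increasing descriptive/computational cost.
        for task, new_solver in propose_candidates(solver):
            novel = not solves(solver, task)           # predecessor fails on it
            keeps_old = all(solves(new_solver, t) for t in repertoire)
            if novel and keeps_old and solves(new_solver, task):
                solver = new_solver                    # accept first validated pair
                repertoire.append(task)
                break
        else:
            break  # no acceptable pair found at this budget
    return solver, repertoire
```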
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang
The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, as the myriad of smart sensors and meters in the power grid increases by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze, and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.
Critical Reading: Visual Skills.
ERIC Educational Resources Information Center
Adams, Dennis M.
The computer controlled visual media, particularly television, are becoming an increasingly powerful instrument for the manipulation of thought. Powerful visual images increasingly reflect and shape personal and external reality--politics being one such example--and it is crucial that the viewing public understand the nature of these media…
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas
2008-01-01
A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.
1997-09-29
This is one of the highest resolution images ever recorded of Jupiter's temperature field. It was obtained by NASA's Galileo mission. This map, shown in the lower panel, indicates the forces powering Jovian winds.
Dosimetric Considerations in Radioimmunotherapy and Systemic Radionuclide Therapies: A Review
Loke, Kelvin S. H.; Padhy, Ajit K.; Ng, David C. E.; Goh, Anthony S.W.; Divgi, Chaitanya
2011-01-01
Radiopharmaceutical therapy, once touted as the “magic bullet” in radiation oncology, is increasingly being used in the treatment of a variety of malignancies; albeit in later disease stages. With ever-increasing public and medical awareness of radiation effects, radiation dosimetry is becoming more important. Dosimetry allows administration of the maximum tolerated radiation dose to the tumor/organ to be treated but limiting radiation to critical organs. Traditional tumor dosimetry involved acquiring pretherapy planar scans and plasma estimates with a diagnostic dose of intended radiopharmaceuticals. New advancements in single photon emission computed tomography and positron emission tomography systems allow semi-quantitative measurements of radiation dosimetry thus allowing treatments tailored to each individual patient. PMID:22144871
Searching for Extraterrestrial Intelligence with the Square Kilometre Array
NASA Astrophysics Data System (ADS)
Siemion, A.; Benford, J.; Cheng-Jin, J.; Chennamangalam, J.; Cordes, J. M.; Falcke, H. D. E.; Garrington, S. T.; Garrett, M. A.; Gurvits, L.; Hoare, M.; Korpela, E.; Lazio, J.; Messerschmitt, D.; Morrison, I.; O'Brien, T.; Paragi, Z.; Penny, A.; Spitler, L.; Tarter, J.; Werthimer, D.
2015-04-01
The vast collecting area of the Square Kilometre Array (SKA), harnessed by sensitive receivers, flexible digital electronics and increased computational capacity, could permit the most sensitive and exhaustive search for technologically-produced radio emission from advanced extraterrestrial intelligence (SETI) ever performed. For example, SKA1-MID will be capable of detecting a source roughly analogous to terrestrial high-power radars (e.g., air route surveillance or ballistic missile warning radars; EIRP, equivalent isotropic radiated power, ~10^17 erg sec^-1) at 10 pc in less than 15 minutes, and with a modest four beam SETI observing system could, in one minute, search every star in the primary beam out to ~100 pc for radio emission comparable to that emitted by the Arecibo Planetary Radar (EIRP ~2 x 10^20 erg sec^-1). The flexibility of the signal detection systems used for SETI searches with the SKA will allow new algorithms to be employed that will provide sensitivity to a much wider variety of signal types than previously searched for. Here we discuss the astrobiological and astrophysical motivations for radio SETI and describe how the technical capabilities of the SKA will explore the radio SETI parameter space. We detail several conceivable SETI experimental programs on all components of SKA1, including commensal, primary-user, targeted and survey programs and project the enhancements to them possible with SKA2. We also discuss target selection criteria for these programs, and in the case of commensal observing, how the varied use cases of other primary observers can be used to full advantage for SETI.
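For context on the EIRP figures quoted above, the inverse-square law gives the flux such a transmitter would produce at Earth. The sketch below is a back-of-envelope aid using standard unit conversions, not part of the paper's sensitivity analysis.

```python
# Back-of-envelope context for the EIRP figures quoted above (illustrative only).
import math

PC_IN_M = 3.0857e16          # one parsec in metres
ERG_PER_S_TO_W = 1e-7        # 1 erg/s = 1e-7 W

def flux_at_earth_w_m2(eirp_erg_per_s, distance_pc):
    """Isotropic-equivalent flux S = EIRP / (4 pi d^2)."""
    d_m = distance_pc * PC_IN_M
    return eirp_erg_per_s * ERG_PER_S_TO_W / (4 * math.pi * d_m**2)

# Arecibo-planetary-radar-like transmitter (~2e20 erg/s) seen from 100 pc:
print(f"{flux_at_earth_w_m2(2e20, 100):.2e} W/m^2")   # ~1.7e-25 W/m^2
```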
Leaching of heavy metals from E-waste in simulated landfill columns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yadong; Richardson, Jay B.; Mark Bricka, R.
2009-07-15
In recent history the volume of electronic products purchased by consumers has dramatically escalated. As a result this has produced an ever-increasing electronic waste (E-waste) stream, which has generated concerns regarding the E-waste's potential for adversely impacting the environment. The leaching of toxic substances from obsolete personal computers (PCs) and cathode ray tubes (CRTs) of televisions and monitors, which are the most significant components in the E-waste stream, was studied using landfill simulation in columns. Five columns were employed. One column served as a control which was filled with municipal solid waste (MSW), two columns were filled with a mixture of MSW and CRTs, and the other two were filled with MSW and computer components including printed wire boards, hard disc drives, floppy disc drives, CD/DVD drives, and power supply units. The leachate generated from the columns was monitored for toxic materials throughout the two-year duration of the study. Results indicate that lead (Pb) and various other heavy metals that were of environmental and health concern were not detected in the leachate from the simulators. When the samples of the solids were collected from underneath the E-waste in the columns and were analyzed, a significant amount of Pb was detected. This indicates that Pb could readily leach from the E-waste, but was absorbed by the solids around the E-waste materials. While Pb was not observed in the leachate in this study, it is likely that the Pb would eventually enter the leachate after long term transport.
Plant, Richard R; Turner, Garry
2009-08-01
Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multi core processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever larger thin film transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and new 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in timing characteristics of hardware used commonly in psychological research.
Use of UNIX in large online processor farms
NASA Astrophysics Data System (ADS)
Biel, Joseph R.
1990-08-01
There has been a recent rapid increase in the power of RISC computers running the UNIX operating system. Fermilab has begun to make use of these computers in the next generation of offline computer farms. It is also planning to use such computers in online computer farms. Issues involved in constructing online UNIX farms are discussed.
Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth
2017-09-13
Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.
NASA Astrophysics Data System (ADS)
Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth
2017-09-01
Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.
Hubble 2006: Science Year in Review
NASA Technical Reports Server (NTRS)
Brown, R.
2007-01-01
The 10 science articles selected for this year's annual science report exemplify the range of Hubble research from the Solar System, across our Milky Way, and on to distant galaxies. The objects of study include a new feature on Jupiter, binaries in the Kuiper Belt, Cepheid variable stars, the Orion Nebula, distant transiting planets, lensing galaxies, active galactic nuclei, red-and-dead galaxies, and galactic outflows and jets. Each narrative strives to build the reader's understanding of the topics and issues, and to place the latest research in historical, as well as scientific, context. These essays reveal trends in the practice of astronomy. More powerful computers are permitting astronomers to study ever larger data sets, enabling the discovery of subtle effects and rare objects. (Two investigations created mosaic images that are among the largest produced to date.) Multiwavelength data sets from ground-based telescopes, as well as from the other Great Observatories, Spitzer and Chandra, are increasingly important for holistic interpretations of Hubble results. This yearbook also presents profiles of 12 individuals who work with Hubble, or Hubble data, on a daily basis. They are representative of the many students, scientists, engineers, and other professionals who are proudly associated with Hubble. Their stories collectively communicate the excitement and reward of careers related to space science and technology.
ERIC Educational Resources Information Center
American School Board Journal, 1972
1972-01-01
A special study of school bus transportation that (1) describes board responsibility for bus transportation; (2) discusses the merits of buying, leasing, or contracting for buses; (3) points out the inadequacy of State safety requirements; and (4) presents the merits of L.P.-gas powered and diesel powered buses. (JF)
Hidden in Plain Sight: Signs of Great Power War
2016-06-01
China can change the strategic balance of power in the region without ever fighting, the acme of skill according to Sun Tzu. In fact, the balance of ... "...put anyone into the shade, but we demand a place for ourselves in the sun." This speech became the ideological foundation for Germany's...
ERIC Educational Resources Information Center
Suo, Shuguang
2013-01-01
Organizations have been forced to rethink business models and restructure facilities through IT innovation as they have faced the challenges arising from globalization, mergers and acquisitions, big data, and the ever-changing demands of customers. Cloud computing has emerged as a new computing paradigm that has fundamentally shaped the business…
2005-03-01
computing equipment, the idea of computer security has also become embedded in our society. Ever since the Michelangelo virus of 1992, when...
Automatic maintenance payload on board of a Mexican LEO microsatellite
NASA Astrophysics Data System (ADS)
Vicente-Vivas, Esaú; García-Nocetti, Fabián; Mendieta-Jiménez, Francisco
2006-02-01
A few research institutions in Mexico are working together to finalize the integration of a technological demonstration microsatellite called Satex, aiming at the launch of the first fully domestically designed and manufactured space vehicle. The project is based on technical knowledge gained in previous space experiences, particularly in developing GASCAN automatic experiments for NASA's space shuttle, and on some support obtained from the local team which assembled the México-OSCAR-30 microsatellites. Satex includes three autonomous payloads and a power subsystem, each one with a local microcomputer to provide intelligent and dedicated control. It also contains a flight computer (FC) with a pair of full redundancies. This enables the remote maintenance of processing boards from the ground station. A fourth communications payload depends on the flight computer for control purposes. It was decided to develop a fifth payload for the satellite. It adds value to the available on-board computers and extends the opportunity for a developing country to learn and to generate domestic space technology. Its aim is to provide automatic maintenance capabilities for the most critical on-board computer in order to achieve continuous satellite operations. This paper presents the virtual computer architecture specially developed to provide maintenance capabilities to the flight computer. The architecture is periodically implemented by software with a small number of physical processors (FC processors) and virtual redundancies (payload processors) to emulate a hybrid redundancy computer. Communications among processors are accomplished over a fault-tolerant LAN. This allows versatile operating behavior in terms of data communication as well as distributed fault tolerance. Obtained results, payload validation, and reliability results are also presented.
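By way of illustration of the redundancy idea, the snippet below shows textbook majority voting among redundant processing results; it is a generic sketch and not the Satex virtual-architecture design itself.

```python
# Generic illustration of majority voting among redundant processors; this is a
# textbook triple-modular-redundancy sketch, not the Satex architecture itself.
from collections import Counter

def vote(outputs):
    """Return the majority value among redundant computations, or None on a tie."""
    tally = Counter(outputs).most_common()
    if len(tally) > 1 and tally[0][1] == tally[1][1]:
        return None                      # no majority: flag for ground intervention
    return tally[0][0]

# Example: one of three redundant processors (a payload CPU standing in as a
# virtual redundancy) returns a corrupted result; the voter masks the fault.
print(vote([42, 42, 7]))   # 42
```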
Learning phase transitions by confusion
NASA Astrophysics Data System (ADS)
van Nieuwenburg, Evert P. L.; Liu, Ye-Hua; Huber, Sebastian D.
2017-02-01
Classifying phases of matter is key to our understanding of many problems in physics. For quantum-mechanical systems in particular, the task can be daunting due to the exponentially large Hilbert space. With modern computing power and access to ever-larger data sets, classification problems are now routinely solved using machine-learning techniques. Here, we propose a neural-network approach to finding phase transitions, based on the performance of a neural network after it is trained with data that are deliberately labelled incorrectly. We demonstrate the success of this method on the topological phase transition in the Kitaev chain, the thermal phase transition in the classical Ising model, and the many-body-localization transition in a disordered quantum spin chain. Our method does not depend on order parameters, knowledge of the topological content of the phases, or any other specifics of the transition at hand. It therefore paves the way to the development of a generic tool for identifying unexplored phase transitions.
Learning phase transitions by confusion
NASA Astrophysics Data System (ADS)
van Nieuwenburg, Evert; Liu, Ye-Hua; Huber, Sebastian
Classifying phases of matter is a central problem in physics. For quantum mechanical systems, this task can be daunting owing to the exponentially large Hilbert space. Thanks to the available computing power and access to ever larger data sets, classification problems are now routinely solved using machine learning techniques. Here, we propose to use a neural network based approach to find transitions depending on the performance of the neural network after training it with deliberately incorrectly labelled data. We demonstrate the success of this method on the topological phase transition in the Kitaev chain, the thermal phase transition in the classical Ising model, and the many-body-localization transition in a disordered quantum spin chain. Our method does not depend on order parameters, knowledge of the topological content of the phases, or any other specifics of the transition at hand. It therefore paves the way to a generic tool to identify unexplored transitions.
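A schematic sketch of the confusion scheme described in these two abstracts follows; the classifier choice and the variable names (`samples`, `params`) are assumptions for illustration, not the authors' code.

```python
# Schematic illustration of the confusion scheme (not the paper's code).
# `samples` is assumed to be an array of feature vectors measured at the
# control-parameter values in `params`; both are hypothetical inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def confusion_curve(samples, params, candidate_points):
    """For each proposed critical point, train with deliberately guessed labels
    (0 below the point, 1 above) and record test accuracy. The resulting curve
    typically shows a W shape whose central peak marks the true transition."""
    accuracies = []
    for p_c in candidate_points:
        labels = (params > p_c).astype(int)
        if labels.min() == labels.max():
            accuracies.append(1.0)   # trivial labelling: classification is perfect
            continue
        X_tr, X_te, y_tr, y_te = train_test_split(samples, labels, test_size=0.3,
                                                  random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        accuracies.append(clf.score(X_te, y_te))
    return np.array(accuracies)
```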
Single photon detection of 1.5 THz radiation with the quantum capacitance detector
NASA Astrophysics Data System (ADS)
Echternach, P. M.; Pepper, B. J.; Reck, T.; Bradford, C. M.
2018-01-01
Far-infrared spectroscopy can reveal secrets of galaxy evolution and heavy-element enrichment throughout cosmic time, prompting astronomers worldwide to design cryogenic space telescopes for far-infrared spectroscopy. The most challenging aspect is a far-infrared detector that is both exquisitely sensitive (limited by the zodiacal-light noise in a narrow wavelength band, λ/Δλ ~ 1,000) and array-able to tens of thousands of pixels. We present the quantum capacitance detector, a superconducting device adapted from quantum computing applications in which photon-produced free electrons in a superconductor tunnel into a small capacitive island embedded in a resonant circuit. The quantum capacitance detector has an optically measured noise equivalent power below 10^-20 W Hz^-1/2 at 1.5 THz, making it the most sensitive far-infrared detector ever demonstrated. We further demonstrate individual far-infrared photon counting, confirming the excellent sensitivity and suitability for cryogenic space astrophysics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corliss, William R
1967-01-01
This booklet relates how wires, transistors, and human ingenuity are combined to produce machines that surpass all the calculating prodigies that ever lived in speed, accuracy, and stamina, though perhaps not in the matter of mystery.
Artificial-life researchers try to create social reality.
Flam, F
1994-08-12
Some scientists, among them cosmologist Stephen Hawking, argue that computer viruses are alive. A better case might be made for many of the self-replicating silicon-based creatures featured at the fourth Conference on Artificial Life, held on 5 to 8 July in Boston. Researchers from computer science, biology, and other disciplines presented computer programs that, among other things, evolved cooperative strategies in a selfish world and recreated themselves in ever more complex forms.
Military application of flat panel displays in the Vetronics Technology Testbed prototype vehicle
NASA Astrophysics Data System (ADS)
Downs, Greg; Roller, Gordon; Brendle, Bruce E., Jr.; Tierney, Terrance
2000-08-01
The ground combat vehicle crew of tomorrow must be able to perform their mission more effectively and efficiently if they are to maintain dominance over ever more lethal enemy forces. Increasing performance, however, becomes even more challenging when the soldier is subject to reduced crew sizes, a never-ending requirement to adapt to ever-evolving technologies and the demand to assimilate an overwhelming array of battlefield data. This, combined with the requirement to fight with equal effectiveness at any time of the day or night in all types of weather conditions, makes it clear that this crew of tomorrow will need timely, innovative solutions to overcome this multitude of barriers if they are to achieve their objectives. To this end, the U.S. Army is pursuing advanced crew stations with human-computer interfaces that will allow the soldier to take full advantage of emerging technologies and make efficient use of the battlefield information available to him in a program entitled 'Vetronics Technology Testbed.' Two critical components of the testbed are a complement of panoramic indirect vision displays to permit drive-by-wire and multi-function displays for managing lethality, mobility, survivability, situational awareness and command and control of the vehicle. These displays are being developed and built by Computing Devices Canada, Ltd. This paper addresses the objectives of the testbed program and the technical requirements and design of the displays.
Reijnders, L; Hoogeveen, M J
2001-07-01
The introduction of e-commerce is changing purchase and distribution patterns dramatically. One of the observed effects is that logistics become more efficient as products are directly shipped from a manufacturer or wholesaler to an end-user. Another effect is that market transparency increases, which has a downward pressure on prices of many products sold via the Internet. This article addresses the energy implications of e-commerce at the micro level. This is done by quantifying the transport related energy savings in the case of a Dutch online computer reseller and by assessing the extra energy expenditure associated with increased buying power of online buyers. It is found that energy use per article sold by the online computer reseller is lower. However, taking into account indirect effects such as increased consumer buying power, there are scenarios that lead to an overall increase in energy use.
Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozacik, Stephen
Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.
Big data computing: Building a vision for ARS information management
USDA-ARS?s Scientific Manuscript database
Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...
Storage media for computers in radiology.
Dandu, Ravi Varma
2008-11-01
The introduction and wide acceptance of digital technology in medical imaging have resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media that they need, based not just on the volume of data but also on considerations such as the speed and ease of access, redundancy, security, costs, as well as the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits.
User’s Manual for the AFSATCOM Terminal Upgrades Life Cycle Cost Model. Volume I.
1981-10-01
EFAIL(I,NS)*TNB(NS) summed over NS, times [LRU(I) + RTS(NHI(I))]*NRTS(I)*DRCT(LO(NS)) + NRTS(NHI(I))*(1 - COND(I))*DAD, where TNB(NS) = total number of bases within the ... required anywhere in the ATU logistics system, i.e., if it ever fails, and equals 0 otherwise. Computed by: IUT(I) = U(EFAIL(I,NS) summed over NS). IMC = initial depot ... I)*XFPR*BRCT + CIMF(NS)*EFAIL(I,B)*NBC(B)*LRU(I)*FINC*FPR(I)*XFPR*CRCT summed over bases B with NHB(B) = NS. The terms in the equation for TDFPR(I) above account for increases
Parallel approach in RDF query processing
NASA Astrophysics Data System (ADS)
Vajgl, Marek; Parenica, Jan
2017-07-01
Parallel processing is nowadays a very cheap way to increase computational power, owing to the wide availability of multithreaded computational units; this hardware has become a typical part of today's personal computers and notebooks. This contribution deals with experiments on how the evaluation of a computationally complex algorithm for inference over RDF data can be parallelized on graphics cards to decrease computation time.
Explore the Future: Will Books Have a Place in the Computer Classroom?
ERIC Educational Resources Information Center
Jobe, Ronald A.
The question of the place of books in a classroom using computers appears to be simple, yet it is one of vital concern to teachers. The availability of programs (few of which focus on literary appreciation), the mesmerizing qualities of the computer, its distortion of time, the increasing power of computers over teacher time, and the computer's…
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has created a need for high performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom built desktop computers can be arranged in a rack mount situation, gaining the space saving of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components which multiplies the performance of a single desktop machine, while minimizing occupied space and still remaining cost effective.
Green Secure Processors: Towards Power-Efficient Secure Processor Design
NASA Astrophysics Data System (ADS)
Chhabra, Siddhartha; Solihin, Yan
With the increasing wealth of digital information stored on computer systems today, security issues have become increasingly important. In addition to attacks targeting the software stack of a system, hardware attacks have become equally likely. Researchers have proposed Secure Processor Architectures which utilize hardware mechanisms for memory encryption and integrity verification to protect the confidentiality and integrity of data and computation, even from sophisticated hardware attacks. While there have been many works addressing performance and other system level issues in secure processor design, power issues have largely been ignored. In this paper, we first analyze the sources of power (energy) increase in different secure processor architectures. We then present a power analysis of various secure processor architectures in terms of their increase in power consumption over a base system with no protection and then provide recommendations for designs that offer the best balance between performance and power without compromising security. We extend our study to the embedded domain as well. We also outline the design of a novel hybrid cryptographic engine that can be used to minimize the power consumption for a secure processor. We believe that if secure processors are to be adopted in future systems (general purpose or embedded), it is critically important that power issues are considered in addition to performance and other system level issues. To the best of our knowledge, this is the first work to examine the power implications of providing hardware mechanisms for security.
Schauer, Gillian L; King, Brian A; McAfee, Timothy A
2017-10-01
Approximately 70% of current (past 30-day) adult marijuana users are current tobacco users, which may complicate tobacco cessation. We assessed prevalence and trends in tobacco cessation among adult ever tobacco users, by marijuana use status. Data came from the National Survey on Drug Use and Health, a cross-sectional, nationally representative, household survey of U.S. civilians. Analyses included current, former, and never marijuana users aged≥18 reporting ever tobacco use (cigarette, cigar, chew/snuff). We computed weighted estimates (2013-2014) of current tobacco use, recent tobacco cessation (quit 30days to 12months), and sustained tobacco cessation (quit>12months) and adjusted trends in tobacco use and cessation (2005-2014) by marijuana use status. We also assessed the association between marijuana and tobacco use status. In 2013-2014, among current adult marijuana users reporting ever tobacco use, 69.1% were current tobacco users (vs. 38.5% of former marijuana users, p<0.0001, and 28.2% of never marijuana users, p<0.0001); 9.1% reported recent tobacco cessation (vs. 8.4% of former marijuana users, p<0.01, and 6.3% of never marijuana users, p<0.001), and 21.8% reported sustained tobacco cessation (vs. 53.1% of former marijuana users, p<0.01, and 65.5% of never marijuana users, p<0.0001). Between 2005 and 2014, current tobacco use declined and sustained tobacco cessation increased among all marijuana use groups. Current marijuana users who ever used tobacco had double the prevalence (vs. never-marijuana users) of current tobacco use, and significantly lower sustained abstinence. Interventions addressing tobacco cessation in the context of use of marijuana and other substances may be warranted. Copyright © 2017 Elsevier Ltd. All rights reserved.
Power and Performance Trade-offs for Space Time Adaptive Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino
Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirements. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.
Gammaitoni, Luca; Chiuchiú, D; Madami, M; Carlotti, G
2015-06-05
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
NASA Astrophysics Data System (ADS)
Gammaitoni, Luca; Chiuchiú, D.; Madami, M.; Carlotti, G.
2015-06-01
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
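For context on the energy-dissipation limits discussed above, the Landauer bound of k_B T ln 2 per irreversibly erased bit is the standard reference point; the figure below is a textbook result, not a number quoted in the abstract.

```python
# Context for the dissipation limits mentioned above: the Landauer bound,
# k_B * T * ln 2 per irreversibly erased bit (standard result, not from the abstract).
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_k=300.0):
    return K_B * temperature_k * math.log(2)

print(f"{landauer_limit_joules():.3e} J per erased bit at 300 K")  # ~2.87e-21 J
```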
Visualizing functional motions of membrane transporters with molecular dynamics simulations.
Shaikh, Saher A; Li, Jing; Enkavi, Giray; Wen, Po-Chao; Huang, Zhijian; Tajkhorshid, Emad
2013-01-29
Computational modeling and molecular simulation techniques have become an integral part of modern molecular research. Various areas of molecular sciences continue to benefit from, indeed rely on, the unparalleled spatial and temporal resolutions offered by these technologies, to provide a more complete picture of the molecular problems at hand. Because of the continuous development of more efficient algorithms harvesting ever-expanding computational resources, and the emergence of more advanced and novel theories and methodologies, the scope of computational studies has expanded significantly over the past decade, now including much larger molecular systems and far more complex molecular phenomena. Among the various computer modeling techniques, the application of molecular dynamics (MD) simulation and related techniques has particularly drawn attention in biomolecular research, because of the ability of the method to describe the dynamical nature of the molecular systems and thereby to provide a more realistic representation, which is often needed for understanding fundamental molecular properties. The method has proven to be remarkably successful in capturing molecular events and structural transitions highly relevant to the function and/or physicochemical properties of biomolecular systems. Herein, after a brief introduction to the method of MD, we use a number of membrane transport proteins studied in our laboratory as examples to showcase the scope and applicability of the method and its power in characterizing molecular motions of various magnitudes and time scales that are involved in the function of this important class of membrane proteins.
Visualizing Functional Motions of Membrane Transporters with Molecular Dynamics Simulations
2013-01-01
Computational modeling and molecular simulation techniques have become an integral part of modern molecular research. Various areas of molecular sciences continue to benefit from, indeed rely on, the unparalleled spatial and temporal resolutions offered by these technologies, to provide a more complete picture of the molecular problems at hand. Because of the continuous development of more efficient algorithms harvesting ever-expanding computational resources, and the emergence of more advanced and novel theories and methodologies, the scope of computational studies has expanded significantly over the past decade, now including much larger molecular systems and far more complex molecular phenomena. Among the various computer modeling techniques, the application of molecular dynamics (MD) simulation and related techniques has particularly drawn attention in biomolecular research, because of the ability of the method to describe the dynamical nature of the molecular systems and thereby to provide a more realistic representation, which is often needed for understanding fundamental molecular properties. The method has proven to be remarkably successful in capturing molecular events and structural transitions highly relevant to the function and/or physicochemical properties of biomolecular systems. Herein, after a brief introduction to the method of MD, we use a number of membrane transport proteins studied in our laboratory as examples to showcase the scope and applicability of the method and its power in characterizing molecular motions of various magnitudes and time scales that are involved in the function of this important class of membrane proteins. PMID:23298176
Optically Controlled Devices and Ultrafast Laser Sources for Signal Processing.
1987-06-30
A2 are input/output cavity coupling elements. C1 and C2 are coaxial cables. The resistance (R) and inductance (L) provide isolation between the DC power ... the same power. 3. The continuously operating phosphate Nd:glass laser has been modelocked for the first time ever to generate 7 ps pulses. We have ... media in a modelocked laser to understand the fundamental pulse generation mechanism. 2. Develop compact, high-power sources of short pulses using
Animation: What makes up the Space Launch System’s massive core stage
2017-04-24
NASA’s new rocket, the Space Launch System, will be the most powerful rocket ever built for deep-space missions. The 212-foot core stage is the largest rocket stage ever built and will fuel four RS-25 engines that will help launch SLS. This animation depicts the parts that make up the core stage and how these parts will be joined to form the entire stage. The five major parts include: the engine section, the hydrogen tank, the intertank, the liquid oxygen tank and the forward skirt.
Yet More Lessons From Complexity. Unity the key for Peace.
NASA Astrophysics Data System (ADS)
Puente, C. E.
2004-12-01
The last few decades have witnessed the development of a host of ideas aimed at understanding and predicting nature's ever-present complexity. It is shown that such work provides, through its detailed study of order and disorder, a suitable framework for visualizing the dynamics and consequences of mankind's ever-present divisive traits. Specifically, this work explains how recent universal results pertaining to power-laws, self-organized criticality, and space-filling transformations provide additional and pertinent reminders that point to unity as an essential element for achieving peace.
Some Men's Daughters: Teaching D. H. Lawrence's "The Horse Dealer's Daughter."
ERIC Educational Resources Information Center
Mallett, Sandra-Lynne J.
"The Horse Dealer's Daughter" is usually taught as being about love's redeeming power. Usual interpretations of this story, however, ignore its title. It is also about a woman who discovers and uses her sexual power. To begin discussion, students are asked how many have ridden a horse and whether they have ever bought or sold a horse at…
Hydrogen-Oxygen PEM Regenerative Fuel Cell Development at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Bents, David J.; Scullin, Vincent J.; Chang, B. J.; Johnson, Donald W.; Garcia, Christopher P.; Jakupca, Ian J.
2006-01-01
The closed-cycle hydrogen-oxygen PEM regenerative fuel cell (RFC) at NASA Glenn Research Center has demonstrated multiple back-to-back contiguous cycles at rated power, and round-trip efficiencies up to 52 percent. It is the first fully closed-cycle regenerative fuel cell ever demonstrated (the entire system is sealed: nothing enters or escapes the system other than electrical power and heat). During FY2006 the system has undergone numerous modifications and internal improvements aimed at reducing parasitic power, heat loss and noise signature, and at increasing its functionality as an unattended automated energy storage device and its in-service reliability. It also serves as a testbed for development of a 600 W-hr/kg flight configuration, through the successful demonstration of lightweight fuel cell and electrolyser stacks and supporting components. The RFC has demonstrated its potential as an energy storage device for aerospace solar power systems such as solar electric aircraft, lunar and planetary surface installations, or any airless environment where minimum system weight is critical. Its development process continues on a path of risk reduction for the flight system NASA will eventually need for the manned lunar outpost.
NASA Astrophysics Data System (ADS)
Gramelsberger, Gabriele
The scientific understanding of atmospheric processes has been rooted in the mechanical and physical view of nature ever since dynamic meteorology gained ground in the late 19th century. Conceiving the atmosphere as a giant 'air mass circulation engine' entails applying hydro- and thermodynamical theory to the subject in order to describe the atmosphere's behaviour on small scales. But when it comes to forecasting, it turns out that this view is far too complex to be computed. The limitation of analytical methods precludes an exact solution, forcing scientists to make use of numerical simulation. However, simulation introduces two prerequisites to meteorology: First, the partitioning of the theoretical view into two parts-the large-scale behaviour of the atmosphere, and the effects of smaller-scale processes on this large-scale behaviour, so-called parametrizations; and second, the dependency on computational power in order to achieve a higher resolution. The history of today's atmospheric circulation modelling can be reconstructed as the attempt to improve the handling of these basic constraints. It can be further seen as the old schism between theory and application under new circumstances, which triggers a new discussion about the question of how processes may be conceived in atmospheric modelling.
TRIO: Burst Buffer Based I/O Orchestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Teng; Oral, H Sarp; Pritchard, Michael
The growing computing power on leadership HPC systems is often accompanied by ever-escalating failure rates. Checkpointing is a common defensive mechanism used by scientific applications for failure recovery. However, directly writing the large and bursty checkpointing dataset to the parallel filesystem can incur significant I/O contention on storage servers. Such contention in turn degrades the raw bandwidth utilization of storage servers and prolongs the average job I/O time of concurrent applications. Recently, burst buffers have been proposed as an intermediate layer to absorb the bursty I/O traffic from compute nodes to the storage backend. But an I/O orchestration mechanism is still desired to efficiently move checkpointing data from burst buffers to the storage backend. In this paper, we propose a burst buffer based I/O orchestration framework, named TRIO, to intercept and reshape the bursty writes for better sequential write traffic to storage servers. Meanwhile, TRIO coordinates the flushing orders among concurrent burst buffers to alleviate the contention on storage server bandwidth. Our experimental results reveal that TRIO can deliver 30.5% higher bandwidth and reduce the average job I/O time by 37% on average for data-intensive applications in various checkpointing scenarios.
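As a rough sketch of the two ideas in the abstract (reordering absorbed writes and serializing flushes), the following toy code is illustrative only: the class and method names are invented here and do not reflect TRIO's actual implementation.

    # Conceptual sketch only: reshape bursty writes and serialize flushes to one server.
    class BurstBuffer:
        def __init__(self, name):
            self.name = name
            self.pending = {}            # file offset -> data absorbed from compute nodes

        def absorb(self, offset, data):  # bursty, out-of-order checkpoint writes land here
            self.pending[offset] = data

        def flush(self, storage):
            for offset in sorted(self.pending):   # reshape: issue writes in offset order
                storage.write(self.name, offset, self.pending[offset])
            self.pending.clear()

    class Coordinator:
        """Grants the storage server to one buffer at a time to avoid contention."""
        def drain(self, buffers, storage):
            for bb in buffers:           # a real system would order by load, deadlines, etc.
                bb.flush(storage)

    class Storage:
        def write(self, who, offset, data):
            print(f"{who}: sequential write at offset {offset} ({len(data)} bytes)")

    buffers = [BurstBuffer("bb0"), BurstBuffer("bb1")]
    buffers[0].absorb(4096, b"x" * 512); buffers[0].absorb(0, b"y" * 512)
    buffers[1].absorb(8192, b"z" * 512)
    Coordinator().drain(buffers, Storage())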
Computer-Mediated Communication (CMC) in L2 Oral Proficiency Development: A Meta-Analysis
ERIC Educational Resources Information Center
Lin, Huifen
2015-01-01
The ever growing interest in the development of foreign or second (L2) oral proficiency in a computer-mediated communication (CMC) classroom has resulted in a large body of studies looking at both the direct and indirect effects of CMC interventions on the acquisition of oral competences. The present study employed a quantitative meta-analytic…
ERIC Educational Resources Information Center
Laws, Priscilla W.; Willis, Maxine C.; Jackson, David P.; Koenig, Kathleen; Teese, Robert
2015-01-01
Ever since the first generalized computer-assisted instruction system (PLATO) was introduced over 50 years ago, educators have been adding computer-based materials to their classes. Today many textbooks have complete online versions that include video lectures and other supplements. In the past 25 years the web has fueled an explosion of online…
Making a Computer Model of the Most Complex System Ever Built - Continuum
Eastern Interconnection, all as a function of time. All told, that's about 1,000 gigabytes of data ... the modeling software steps forward in time, those decisions affect how the grid operates under ... Interconnection at five-minute intervals for one year would have required more than 400 days of computing time
Selecting Software for Students with Learning and Other Disabilities
ERIC Educational Resources Information Center
Marino, Matthew T.; Tsurusaki, Blakely K.; Basham, James D.
2011-01-01
Have you ever bought a computer program that you thought would be great for your struggling students, only to find that it did not work on your school computers, or that your students found it difficult to use? Selecting science software for students with learning and other disabilities can be a challenge. This Idea Bank provides a list of…
Geothermal Potential for China, Poland and Turkey with/Financing Workbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J G
This collection of documents presents the results of assessments of the geothermal power potential in three countries: China, Poland, and Turkey. Also included is a Geothermal Financing Workbook, which is intended to provide a comprehensive package of information on financing, financing plans, financial analysis, and financial sources for smaller geothermal resource developers. All three countries are facing ever increasing demands for power in the coming decades, but each has some barriers to fully developing existing resources. For Poland and Turkey, it is important that legislation specific to geothermal resource development be enacted. For China, a crucial step is to develop more detailed and accurate estimates of resource potential. All three countries could benefit from the expertise of U.S. geothermal companies, and this collection of material provides crucial information for those interested companies.
Mesoporous carbon incorporated metal oxide nanomaterials as supercapacitor electrodes.
Jiang, Hao; Ma, Jan; Li, Chunzhong
2012-08-08
Supercapacitors have attracted huge attention in recent years because they have the potential to satisfy the demand for both high energy and high power density in many advanced technologies. However, poor conductivity and cycling stability remain the major challenges to their widespread application. Various strategies have been developed for meeting the ever-increasing energy and power demands in supercapacitors. This Research News article aims to review recent progress in the development of mesoporous carbon incorporated metal oxide nanomaterials, especially metal oxide nanoparticles confined in ordered mesoporous carbon and 1D metal oxides coated with a layer of mesoporous carbon, for high-performance supercapacitor applications. In addition, a recent trend in supercapacitor development, hierarchical porous graphitic carbons (HPGC) that combine macroporous cores, mesoporous walls, and micropores as an excellent support for metal oxides, is also discussed.
NASA Astrophysics Data System (ADS)
Banerjee, Kakoli; Prasad, R. A.
2014-10-01
The whole gamut of genetic data is increasing exponentially. The human genome in its base format occupies almost thirty terabytes of data and doubles in size every two and a half years. It is well known that computational resources are limited. The most important resource that genetic data requires for its collection, storage and retrieval is storage space, and storage is limited. Computational performance also depends on storage and execution time, and transmission capabilities depend directly on the size of the data. Hence data compression techniques become an issue of utmost importance when confronting the task of handling gigantic databases such as GenBank. Decompression is also an issue when such huge databases are being handled. This paper is intended not only to provide genetic data compression but also to partially decompress the genetic sequences.
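The paper's own compression scheme is not reproduced in this abstract; purely as a baseline for intuition, a standard 2-bit-per-base packing (a common reference point in genomic compression, not the authors' method) already achieves 4x over one-byte-per-base storage:

    # Baseline only: pack A/C/G/T into 2 bits each (4 bases per byte).
    CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        out, length = bytearray(), len(seq)
        for i in range(0, length, 4):
            group = seq[i:i + 4]
            byte = 0
            for base in group:
                byte = (byte << 2) | CODE[base]
            byte <<= 2 * (4 - len(group))      # left-align a short final group
            out.append(byte)
        return bytes(out), length

    def unpack(packed, length):
        bases = []
        for byte in packed:
            for shift in (6, 4, 2, 0):
                bases.append(BASE[(byte >> shift) & 0b11])
        return "".join(bases[:length])

    packed, n = pack("ACGTTGCA")
    assert unpack(packed, n) == "ACGTTGCA"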
Signal processing: opportunities for superconductive circuits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralston, R.W.
1985-03-01
Prime motivators in the evolution of increasingly sophisticated communication and detection systems are the needs for handling ever wider signal bandwidths and higher data-processing speeds. These same needs drive the development of electronic device technology. Until recently the superconductive community has been tightly focused on digital devices for high speed computers. The purpose of this paper is to describe opportunities and challenges which exist for both analog and digital devices in a less familiar area, that of wideband signal processing. The function and purpose of analog signal-processing components, including matched filters, correlators and Fourier transformers, will be described and examples of superconductive implementations given. A canonic signal-processing system is then configured using these components and digital output circuits to highlight the important issues of dynamic range, accuracy and equivalent computation rate. (Reprints)
Knowing when to give up: early-rejection stratagems in ligand docking
NASA Astrophysics Data System (ADS)
Skone, Gwyn; Voiculescu, Irina; Cameron, Stephen
2009-10-01
Virtual screening is an important resource in the drug discovery community, of which protein-ligand docking is a significant part. Much software has been developed for this purpose, largely by biochemists and those in related disciplines, who pursue ever more accurate representations of molecular interactions. The resulting tools, however, are very processor-intensive. This paper describes some initial results from a project to review computational chemistry techniques for docking from a non-chemistry standpoint. An abstract blueprint for protein-ligand docking using empirical scoring functions is suggested, and this is used to discuss potential improvements. By introducing computer science tactics such as lazy function evaluation, dramatic increases in throughput can be, and have been, realized using a real-world docking program. Naturally, they can be extended to any system that approximately corresponds to the architecture outlined.
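A generic illustration of the early-rejection idea follows (hypothetical function names; real scoring functions are far richer and must bound their remaining terms for the shortcut to be valid):

    # Illustrative early rejection (lazy evaluation of a pose score).
    # Assumes the remaining contributions are non-negative (e.g. clash penalties),
    # so a partial sum is a valid lower bound on the final score.
    def score_pose(pairwise_terms, best_so_far):
        total = 0.0
        for term in pairwise_terms:
            total += term
            if total >= best_so_far:      # no point finishing the sum: reject early
                return None
        return total

    def screen(poses):
        best, best_pose = float("inf"), None
        for pose_id, terms in poses:
            s = score_pose(terms, best)
            if s is not None:
                best, best_pose = s, pose_id
        return best_pose, best

    poses = [("pose1", [1.0, 2.0, 0.5]), ("pose2", [0.5, 0.2, 0.1]), ("pose3", [3.0, 9.9])]
    print(screen(poses))                  # best pose is "pose2"; "pose3" is rejected after one term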
Puerto Rico's EcoElectrica LNG/power project marks a project financing first
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lammers, R.; Taylor, S.
1998-02-23
On Dec. 15, 1997, Enron International and Kenetech Energy Services achieved financial close on the $670 million EcoElectrica liquefied natural gas terminal and cogeneration project proposed for Puerto Rico. The project involves construction of a liquefied natural gas terminal, cogeneration plant, and desalination unit on the southern coast of Puerto Rico, in the Penuelas/Guayanilla area. EcoElectrica will include a 500-MW, combined-cycle cogeneration power plant fueled mainly by LNG imported from the 400 MMcfd Atlantic LNG project on the island of Trinidad. Achieving financial close on a project of this size is always a time-consuming matter and one with a number of challenges. These challenges were increased by the unique nature of both the project and its financing: no project financing had ever before been completed that combined an LNG terminal and power plant. The paper discusses the project, financing details and challenges, key investment considerations, and integrated project prospects.
Scaling of seismic memory with earthquake size
NASA Astrophysics Data System (ADS)
Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene
2012-07-01
It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic wave form database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the wave form sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
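For readers unfamiliar with DFA, the sketch below implements the standard first-order procedure on a generic series (the textbook algorithm, not the authors' exact processing pipeline); the scaling exponent alpha is the slope of log F(n) against log n.

    import numpy as np

    def dfa(series, window_sizes, order=1):
        """Standard detrended fluctuation analysis: returns the scaling exponent alpha."""
        x = np.asarray(series, dtype=float)
        profile = np.cumsum(x - x.mean())            # integrate the mean-subtracted series
        fluctuations = []
        for n in window_sizes:
            n_windows = len(profile) // n
            f2 = []
            for w in range(n_windows):
                seg = profile[w * n:(w + 1) * n]
                t = np.arange(n)
                coeffs = np.polyfit(t, seg, order)   # local polynomial trend
                f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            fluctuations.append(np.sqrt(np.mean(f2)))
        alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
        return alpha   # ~0.5 uncorrelated, >0.5 power-law correlated, <0.5 anticorrelated

    rng = np.random.default_rng(0)
    print(dfa(rng.standard_normal(10000), [16, 32, 64, 128, 256]))   # close to 0.5 for white noise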
Quality indexing with computer-aided lexicography
NASA Technical Reports Server (NTRS)
Buchan, Ronald L.
1992-01-01
Indexing with computers is a far cry from indexing with the first indexing tool, the manual card sorter. With the aid of computer-aided lexicography, both indexing and indexing tools can provide standardization, consistency, and accuracy, resulting in greater quality control than ever before. A brief survey of computer activity in indexing is presented with detailed illustrations from NASA activity. Applications from techniques mentioned, such as Retrospective Indexing (RI), can be made to many indexing systems. In addition to improving the quality of indexing with computers, the improved efficiency with which certain tasks can be done is demonstrated.
The thermodynamic efficiency of computations made in cells across the range of life
NASA Astrophysics Data System (ADS)
Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan
2017-11-01
Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost, and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly, an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope that engineered biological computers might achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases for increasing bacterial size and converges to the polymerization cost of the ribosome. This cost of the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
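For reference, the Landauer bound invoked above sets the minimum free energy required to erase one bit at temperature T; at roughly physiological temperature (about 310 K) it evaluates to

    E_{\min} = k_B T \ln 2 \approx (1.38\times 10^{-23}\,\mathrm{J\,K^{-1}})(310\,\mathrm{K})(0.693) \approx 3\times 10^{-21}\,\mathrm{J} \approx 0.02\,\mathrm{eV}

per bit. The abstract's comparison of translation against this bound is the authors' result and is not re-derived here.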
Lattice Boltzmann computation of creeping fluid flow in roll-coating applications
NASA Astrophysics Data System (ADS)
Rajan, Isac; Kesana, Balashanker; Perumal, D. Arumuga
2018-04-01
Lattice Boltzmann Method (LBM) has advanced as a class of Computational Fluid Dynamics (CFD) methods used to solve complex fluid systems and heat transfer problems. It has increasingly attracted the interest of researchers in computational physics for solving challenging problems of industrial and academic importance. In this study, LBM is applied to simulate the creeping fluid flow phenomena commonly encountered in manufacturing technologies. In particular, we apply this novel method to simulate the fluid flow phenomena associated with the "meniscus roll coating" application. This prevalent industrial problem, encountered in polymer processing and thin film coating applications, is modelled as a standard lid-driven cavity problem to which creeping flow analysis is applied. This incompressible viscous flow problem is studied for various speed ratios (the ratio of upper to lower lid speed) in two different configurations of lid movement: parallel and anti-parallel wall motion. The flow exhibits interesting patterns that will help in the design of roll coaters.
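For orientation, the core update of a single-relaxation-time (BGK) lattice Boltzmann scheme, the standard form on which lid-driven cavity studies of this kind are typically built (the paper's exact lattice and boundary treatment are not restated here), is

    f_i(\mathbf{x}+\mathbf{c}_i\Delta t,\, t+\Delta t) = f_i(\mathbf{x},t) - \frac{\Delta t}{\tau}\left[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\mathbf{x},t)\right]
    f_i^{\mathrm{eq}} = w_i\,\rho\left[1 + \frac{\mathbf{c}_i\cdot\mathbf{u}}{c_s^2} + \frac{(\mathbf{c}_i\cdot\mathbf{u})^2}{2c_s^4} - \frac{\mathbf{u}\cdot\mathbf{u}}{2c_s^2}\right]

with \rho = \sum_i f_i, \rho\mathbf{u} = \sum_i \mathbf{c}_i f_i, and kinematic viscosity \nu = c_s^2(\tau - \Delta t/2); for the common D2Q9 lattice, c_s^2 = 1/3 in lattice units.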
NASA Astrophysics Data System (ADS)
Burba, G. G.; Johnson, D.; Velgersdyk, M.; Beaty, K.; Forgione, A.; Begashaw, I.; Allyn, D.
2015-12-01
Significant increases in data generation and computing power in recent years have greatly improved spatial and temporal flux data coverage on multiple scales, from a single station to continental flux networks. At the same time, operating budgets for flux teams and station infrastructure are getting ever more difficult to acquire and sustain. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are needed to effectively and efficiently handle the entire process. This would help maximize time dedicated to answering research questions, and minimize time and expenses spent on data processing, quality control and station management. Cross-sharing the stations with external institutions may also help leverage available funding, increase scientific collaboration, and promote data analyses and publications. FluxSuite, a new advanced tool combining hardware, software and web-service, was developed to address these specific demands. It automates key stages of flux workflow, minimizes day-to-day site management, and modernizes the handling of data flows:
- Each next-generation station measures all parameters needed for flux computations
- The field microcomputer calculates final fully-corrected flux rates in real time, including computation-intensive Fourier transforms, spectra, co-spectra, multiple rotations, stationarity, footprint, etc.
- Final fluxes, radiation, weather and soil data are merged into a single quality-controlled file
- Multiple flux stations are linked into an automated time-synchronized network
- The flux network manager, or PI, can see all stations in real time, including fluxes, supporting data, automated reports, and email alerts
- The PI can assign rights, and allow or restrict access to stations and data: selected stations can be shared via rights-managed access internally or with external institutions
- Researchers without stations could form "virtual networks" for specific projects by collaborating with PIs from different actual networks
This presentation provides detailed examples of FluxSuite currently utilized by two large flux networks in China (National Academy of Sciences & Agricultural Academy of Sciences), and smaller networks with stations in the USA, Germany, Ireland, Malaysia and other locations around the globe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.
Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.
The Increasing Effects of Computers on Education.
ERIC Educational Resources Information Center
Gannon, John F.
Predicting that the teaching-learning process in American higher education is about to change drastically because of continuing innovations in computer-assisted technology, this paper argues that this change will be driven by inexpensive but powerful computer technology, and that it will manifest itself by reducing the traditional timing of…
Cloud Computing for Comparative Genomics with Windows Azure Platform
Kim, Insik; Jung, Jae-Yoon; DeLuca, Todd F.; Nelson, Tristan H.; Wall, Dennis P.
2012-01-01
Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services. PMID:23032609
Can a Crescent Mars Ever Be Seen from Earth?
ERIC Educational Resources Information Center
Lamb, John F., Jr.
1990-01-01
Described is an activity that incorporates a computer, geometry, algebra, trigonometry, and calculus to answer questions about the planet Mars. A possible crescent of Mars is compared to those of Venus and Mercury. (KR)
Calibrating Parameters of Power System Stability Models using Advanced Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Diao, Ruisheng; Li, Yuanyuan
With the ever increasing penetration of renewable energy, smart loads, energy storage, and new market behavior, today’s power grid becomes more dynamic and stochastic, which may invalidate traditional study assumptions and pose great operational challenges. Thus, it is of critical importance to maintain good-quality models for secure and economic planning and real-time operation. Following the 1996 Western Systems Coordinating Council (WSCC) system blackout, North American Electric Reliability Corporation (NERC) and Western Electricity Coordinating Council (WECC) in North America enforced a number of policies and standards to guide the power industry to periodically validate power grid models and calibrate poor parameters with the goal of building sufficient confidence in model quality. The PMU-based approach using online measurements without interfering with the operation of generators provides a low-cost alternative to meet NERC standards. This paper presents an innovative procedure and tool suites to validate and calibrate models based on a trajectory sensitivity analysis method and an advanced ensemble Kalman filter algorithm. The developed prototype demonstrates excellent performance in identifying and calibrating bad parameters of a realistic hydro power plant against multiple system events.
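To make the ensemble Kalman filter step concrete, here is a minimal stochastic-perturbation EnKF update on a toy linear parameter-estimation problem (a generic sketch with placeholder names, not the authors' tool or plant model):

    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """One EnKF analysis step.
        X: (n_param, n_ens) parameter ensemble; y: (n_obs,) measurements
        H: (n_obs, n_param) observation operator; R: (n_obs, n_obs) measurement noise covariance
        """
        n_obs, n_ens = len(y), X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                          # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
        return X + K @ (Y - H @ X)                         # updated ensemble

    rng = np.random.default_rng(1)
    true_theta = np.array([1.5, -0.7])                     # "true" parameters to recover
    H = np.array([[1.0, 0.0], [1.0, 1.0]])
    R = 0.01 * np.eye(2)
    X = rng.normal(0.0, 1.0, size=(2, 50))                 # prior ensemble around 0
    for _ in range(10):                                    # assimilate repeated noisy measurements
        y = H @ true_theta + rng.multivariate_normal(np.zeros(2), R)
        X = enkf_update(X, y, H, R, rng)
    print(X.mean(axis=1))                                  # should approach [1.5, -0.7]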
High-power disk lasers: advances and applications
NASA Astrophysics Data System (ADS)
Havrilla, David; Ryba, Tracey; Holzer, Marco
2012-03-01
Though the genesis of the disk laser concept dates to the early 1990s, the disk laser continues to demonstrate the flexibility and the certain future of a breakthrough technology. On-going increases in power per disk, and improvements in beam quality and efficiency, continue to validate the genius of the disk laser concept. As of today, the disk principle has not reached any fundamental limits regarding output power per disk or beam quality, and offers numerous advantages over other high power resonator concepts, especially over monolithic architectures. With about 2,000 high power disk laser installations, and a demand upwards of 1,000 lasers per year, the disk laser has proven to be a robust and reliable industrial tool. With advancements in running cost, investment cost and footprint, manufacturers continue to implement disk laser technology with more vigor than ever. This paper will explain recent advances in disk laser technology and process-relevant features of the laser, like pump diode arrangement, resonator design and integrated beam guidance. In addition, advances in applications in the thick sheet area and very cost efficient high productivity applications like remote welding, remote cutting and cutting of thin sheets will be discussed.
Computational Planning in Facial Surgery.
Zachow, Stefan
2015-10-01
This article reflects the research of the last two decades in computational planning for cranio-maxillofacial surgery. Model-guided and computer-assisted surgery planning has tremendously developed due to ever increasing computational capabilities. Simulators for education, planning, and training of surgery are often compared with flight simulators, where maneuvers are also trained to reduce a possible risk of failure. Meanwhile, digital patient models can be derived from medical image data with astonishing accuracy and thus can serve for model surgery to derive a surgical template model that represents the envisaged result. Computerized surgical planning approaches, however, are often still explorative, meaning that a surgeon tries to find a therapeutic concept based on his or her expertise using computational tools that are mimicking real procedures. Future perspectives of an improved computerized planning may be that surgical objectives will be generated algorithmically by employing mathematical modeling, simulation, and optimization techniques. Planning systems thus act as intelligent decision support systems. However, surgeons can still use the existing tools to vary the proposed approach, but they mainly focus on how to transfer objectives into reality. Such a development may result in a paradigm shift for future surgery planning.
Suicidal Behaviors among Adolescents in Juvenile Detention: Role of Adverse Life Experiences
Bhatta, Madhav P.; Jefferis, Eric; Kavadas, Angela; Alemagno, Sonia A.; Shaffer-King, Peggy
2014-01-01
Purpose: The purpose of this study was to assess the influence of multiple adverse life experiences (sexual abuse, homelessness, running away, and substance abuse in the family) on suicide ideation and suicide attempt among adolescents at an urban juvenile detention facility in the United States. Materials and Methods: The study sample included a total of 3,156 adolescents processed at a juvenile detention facility in an urban area in Ohio between 2003 and 2007. The participants, interacting anonymously with a voice-enabled computer, self-administered a questionnaire with 100 items related to health risk behaviors. Results: Overall, 19.0% reported ever having thought about suicide (suicide ideation) and 11.9% reported ever having attempted suicide (suicide attempt). In the multivariable logistic regression analysis those reporting sexual abuse (Odds Ratio = 2.75; 95% confidence interval = 2.08–3.63) and homelessness (1.51; 1.17–1.94) were associated with increased odds of suicide ideation, while sexual abuse (3.01; 2.22–4.08), homelessness (1.49; 1.12–1.98), and running away from home (1.38; 1.06–1.81) were associated with increased odds of a suicide attempt. Those experiencing all four adverse events were 7.81 times more likely (2.41–25.37) to report having ever attempted suicide than those who experienced none of the adverse events. Conclusions: Considering the high prevalence of adverse life experiences and their association with suicidal behaviors in detained adolescents, these factors should not only be included in the suicide screening tools at the intake and during detention, but should also be used for the intervention programming for suicide prevention. PMID:24586756
Audie, J; Boyd, C
2010-01-01
The case for peptide-based drugs is compelling. Due to their chemical, physical and conformational diversity, and relatively unproblematic toxicity and immunogenicity, peptides represent excellent starting material for drug discovery. Nature has solved many physiological and pharmacological problems through the use of peptides, polypeptides and proteins. If nature could solve such a diversity of challenging biological problems through the use of peptides, it seems reasonable to infer that human ingenuity will prove even more successful. And this, indeed, appears to be the case, as a number of scientific and methodological advances are making peptides and peptide-based compounds ever more promising pharmacological agents. Chief among these advances are powerful chemical and biological screening technologies for lead identification and optimization, methods for enhancing peptide in vivo stability, bioavailability and cell-permeability, and new delivery technologies. Other advances include the development and experimental validation of robust computational methods for peptide lead identification and optimization. Finally, scientific analysis, biology and chemistry indicate the prospect of designing relatively small peptides to therapeutically modulate so-called 'undruggable' protein-protein interactions. Taken together a clear picture is emerging: through the synergistic use of the scientific imagination and the computational, chemical and biological methods that are currently available, effective peptide therapeutics for novel targets can be designed that surpass even the proven peptidic designs of nature.
Micro-CT of rodents: state-of-the-art and future perspectives
Clark, D. P.; Badea, C. T.
2014-01-01
Micron-scale computed tomography (micro-CT) is an essential tool for phenotyping and for elucidating diseases and their therapies. This work is focused on preclinical micro-CT imaging, reviewing relevant principles, technologies, and applications. Commonly, micro-CT provides high-resolution anatomic information, either on its own or in conjunction with lower-resolution functional imaging modalities such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). More recently, however, advanced applications of micro-CT produce functional information by translating clinical applications to model systems (e.g. measuring cardiac functional metrics) and by pioneering new ones (e.g. measuring tumor vascular permeability with nanoparticle contrast agents). The primary limitations of micro-CT imaging are the associated radiation dose and relatively poor soft tissue contrast. We review several image reconstruction strategies based on iterative, statistical, and gradient sparsity regularization, demonstrating that high image quality is achievable with low radiation dose given ever more powerful computational resources. We also review two contrast mechanisms under intense development. The first is spectral contrast for quantitative material discrimination in combination with passive or actively targeted nanoparticle contrast agents. The second is phase contrast which measures refraction in biological tissues for improved contrast and potentially reduced radiation dose relative to standard absorption imaging. These technological advancements promise to develop micro-CT into a commonplace, functional and even molecular imaging modality. PMID:24974176
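The sparsity-regularized reconstruction strategies mentioned above generally take the form of a penalized least-squares problem (stated generically here, not as the specific algorithms reviewed):

    \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \; \tfrac{1}{2}\left\| \mathbf{A}\mathbf{x} - \mathbf{b} \right\|_2^2 + \lambda\, R(\mathbf{x})

where \mathbf{x} is the reconstructed volume, \mathbf{A} the projection (system) matrix, \mathbf{b} the measured projections, and R a sparsity-promoting penalty such as total variation (gradient sparsity); the weight \lambda trades data fidelity against noise suppression, which is how image quality can be preserved at reduced radiation dose.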
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super-high-resolution technology closer to clinical viability.
Interpreting signals from astrophysical transient experiments
O’Brien, Paul T.; Smartt, Stephen J.
2013-01-01
Time-domain astronomy has come of age with astronomers now able to monitor the sky at high cadence, both across the electromagnetic spectrum and using neutrinos and gravitational waves. The advent of new observing facilities permits new science, but the ever-increasing throughput of facilities demands efficient communication of coincident detections and better subsequent coordination among the scientific community so as to turn detections into scientific discoveries. To discuss the revolution occurring in our ability to monitor the Universe and the challenges it brings, on 25–26 April 2012, a group of scientists from observational and theoretical teams studying transients met with representatives of the major international transient observing facilities at the Kavli Royal Society International Centre, UK. This immediately followed the Royal Society Discussion Meeting ‘New windows on transients across the Universe’ held in London. Here, we present a summary of the Kavli meeting at which the participants discussed the science goals common to the transient astronomy community and analysed how to better meet the challenges ahead as ever more powerful observational facilities come on stream. PMID:23630383
New tools for classification and monitoring of autoimmune diseases
Maecker, Holden T.; Lindstrom, Tamsin M.; Robinson, William H.; Utz, Paul J.; Hale, Matthew; Boyd, Scott D.; Shen-Orr, Shai S.; Fathman, C. Garrison
2012-01-01
Rheumatologists see patients with a range of autoimmune diseases. Phenotyping these diseases for diagnosis, prognosis and selection of therapies is an ever increasing problem. Advances in multiplexed assay technology at the gene, protein, and cellular level have enabled the identification of 'actionable biomarkers'; that is, biological metrics that can inform clinical practice. Not only will such biomarkers yield insight into the development, remission, and exacerbation of a disease, they will undoubtedly improve diagnostic sensitivity and accuracy of classification, and ultimately guide treatment. This Review provides an introduction to these powerful technologies that could promote the identification of actionable biomarkers, including mass cytometry, protein arrays, and immunoglobulin and T-cell receptor high-throughput sequencing. In our opinion, these technologies should become part of routine clinical practice for the management of autoimmune diseases. The use of analytical tools to deconvolve the data obtained from use of these technologies is also presented here. These analyses are revealing a more comprehensive and interconnected view of the immune system than ever before and should have an important role in directing future treatment approaches for autoimmune diseases. PMID:22647780
Rocha-Martin, Javier; Harrington, Catriona; Dobson, Alan D.W.; O’Gara, Fergal
2014-01-01
Marine microorganisms continue to be a source of structurally and biologically novel compounds with potential use in the biotechnology industry. The unique physiochemical properties of the marine environment (such as pH, pressure, temperature, osmolarity) and uncommon functional groups (such as isonitrile, dichloroimine, isocyanate, and halogenated functional groups) are frequently found in marine metabolites. These facts have resulted in the production of bioactive substances with different properties than those found in terrestrial habitats. In fact, the marine environment contains a relatively untapped reservoir of bioactivity. Recent advances in genomics, metagenomics, proteomics, combinatorial biosynthesis, synthetic biology, screening methods, expression systems, bioinformatics, and the ever increasing availability of sequenced genomes provides us with more opportunities than ever in the discovery of novel bioactive compounds and biocatalysts. The combination of these advanced techniques with traditional techniques, together with the use of dereplication strategies to eliminate known compounds, provides a powerful tool in the discovery of novel marine bioactive compounds. This review outlines and discusses the emerging strategies for the biodiscovery of these bioactive compounds. PMID:24918453
Second order nonlinear QED processes in ultra-strong laser fields
NASA Astrophysics Data System (ADS)
Mackenroth, Felix
2017-10-01
In the interaction of ultra-intense laser fields with matter, the ever increasing peak laser intensities render nonlinear QED effects ever more important. For long, ultra-intense laser pulses scattering off large systems, like a macroscopic plasma, the interaction time can be longer than the scattering time, leading to multiple scatterings. These are usually approximated as incoherent cascades of single-vertex processes. Under certain conditions, however, this common cascade approximation may be insufficient, as it disregards several effects such as coherent processes, quantum interferences or pulse shape effects. Quantifying deviations of the full amplitude of multiple scatterings from the commonly employed cascade approximations is a formidable, yet unaccomplished task. In this talk we are going to discuss how to compute second order nonlinear QED amplitudes and relate them to the conventional cascade approximation. We present examples for typical second order processes and benchmark the full result against common approximations. We demonstrate that the approximation of multiple nonlinear QED scatterings as a cascade of single interactions has certain limitations and discuss these limits in light of upcoming experimental tests.
Proceedings of the Sixth NASA Langley Formal Methods (LFM) Workshop
NASA Technical Reports Server (NTRS)
Rozier, Kristin Yvonne (Editor)
2008-01-01
Today's verification techniques are hard-pressed to scale with the ever-increasing complexity of safety critical systems. Within the field of aeronautics alone, we find the need for verification of algorithms for separation assurance, air traffic control, auto-pilot, Unmanned Aerial Vehicles (UAVs), adaptive avionics, automated decision authority, and much more. Recent advances in formal methods have made verifying more of these problems realistic. Thus we need to continually re-assess what we can solve now and identify the next barriers to overcome. Only through an exchange of ideas between theoreticians and practitioners from academia to industry can we extend formal methods for the verification of ever more challenging problem domains. This volume contains the extended abstracts of the talks presented at LFM 2008: The Sixth NASA Langley Formal Methods Workshop held on April 30 - May 2, 2008 in Newport News, Virginia, USA. The topics of interest that were listed in the call for abstracts were: advances in formal verification techniques; formal models of distributed computing; planning and scheduling; automated air traffic management; fault tolerance; hybrid systems/hybrid automata; embedded systems; safety critical applications; safety cases; accident/safety analysis.
Seeking Clocks in the Clouds: Nonlinearity and American Precision Air Power
2006-01-01
1995); and John A. Warden, III, "Employing Air Power in the Twenty-first Century," in Richard H. Shultz, Jr. and Robert L. Pfaltzgraff, Jr., eds...made warfare much more certain and precise than was ever thought possible." Richard I. Dunn, III, From Gettysburg to the Gulf and Beyond: Coping With... Richard P. Hallion, Precision Guided Munitions and the New Era of Warfare, RAAF Fairbarn, Australia: Air Power Studies Centre, 1995), 3-4. Online at
Silicon microdisk-based full adders for optical computing.
Ying, Zhoufeng; Wang, Zheng; Zhao, Zheng; Dhar, Shounak; Pan, David Z; Soref, Richard; Chen, Ray T
2018-03-01
Due to the projected saturation of Moore's law, as well as the drive toward drastically increasing bandwidth at lower power consumption, silicon photonics has emerged as one of the most promising alternatives, attracting lasting interest due to the accessibility and maturity of ultra-compact passive and active integrated photonic components. In this Letter, we demonstrate a ripple-carry electro-optic 2-bit full adder using microdisks, which replaces the core part of an electrical full adder with optical counterparts and uses light to carry signals from one bit to the next with high bandwidth and low power consumption per bit. All control signals of the operands are applied simultaneously within each clock cycle. Thus, the severe latency issue that accumulates as the size of the full adder increases can be circumvented, allowing for an improvement in computing speed and a reduction in power consumption. This approach paves the way for future high-speed optical computing systems in the post-Moore's law era.
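Purely as a logic-level reminder of what a ripple-carry adder computes (generic Boolean logic, not a model of the photonic device):

    # Generic 2-bit ripple-carry addition: each full adder produces a sum bit and passes a carry on.
    def full_adder(a, b, cin):
        s = a ^ b ^ cin                       # sum bit
        cout = (a & b) | (cin & (a ^ b))      # carry out, propagated to the next stage
        return s, cout

    def ripple_carry_add(a_bits, b_bits):
        """a_bits, b_bits: least-significant-bit-first lists of 0/1."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    print(ripple_carry_add([1, 1], [1, 0]))   # 3 + 1 -> ([0, 0], 1), i.e. binary 100 = 4

Per the abstract, it is this stage-to-stage carry that the microdisk implementation conveys optically, which is why applying all operand signals within one clock cycle avoids the accumulated electrical latency.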
A SOCIO-ECONOMIST LOOKS AT THE CURRENT VALUES AND CHANGING NEEDS OF YOUTH. FINAL DRAFT.
ERIC Educational Resources Information Center
THEOBALD, ROBERT
MAN HAS ACHIEVED THE POWER TO CREATE AN ENVIRONMENT SUITED TO HIS NEEDS. THIS POWER COMES FROM DEVELOPMENTS IN THE UTILIZATION OF ENERGY, ADVANCEMENTS IN CHEMISTRY, AN INCREASE IN SCIENTIFIC PROBLEM SOLVING ABILITY AND COMPUTER TECHNOLOGY. THESE SOURCES OF POWER RESULT IN THE DRIVE TOWARD THE DEVELOPMENT OF DESTRUCTIVE POWER, THE CAPABILITY OF…
NASA Astrophysics Data System (ADS)
Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats
2014-06-01
Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible to achieve several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows to effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates on demand a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to enable support for interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
Sub-Second Parallel State Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.
This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days’ worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today’s commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator’s single estimate. There are other benefits of the sub-second SE: for example, the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that currently depend on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE also can enable other advanced tools that rely on SE outputs and could be used to further improve operators’ actions and automated controls to mitigate effects of severe events on the grid. The power grid continues to grow and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly more complex power grid.
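For context, the core computation behind such a state estimator is typically the weighted least-squares problem and its Gauss-Newton iteration (a standard formulation; the report's specific parallel solvers are not restated here):

    \min_{\mathbf{x}}\; J(\mathbf{x}) = \left[\mathbf{z} - h(\mathbf{x})\right]^{T} \mathbf{R}^{-1} \left[\mathbf{z} - h(\mathbf{x})\right]
    \left[\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H}\right]\Delta\mathbf{x}^{(k)} = \mathbf{H}^{T}\mathbf{R}^{-1}\left[\mathbf{z} - h(\mathbf{x}^{(k)})\right], \qquad \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} + \Delta\mathbf{x}^{(k)}

where \mathbf{x} collects bus voltage magnitudes and angles, \mathbf{z} the measurements, h(\cdot) the measurement model, \mathbf{R} the measurement error covariance, and \mathbf{H} = \partial h/\partial \mathbf{x}; solving the large sparse gain-matrix system at each iteration is the step that parallel implementations accelerate.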
ERIC Educational Resources Information Center
Nixon, William A.
In 1800 the U.S. democracy faced a challenge when Republican Thomas Jefferson defeated Federalist President John Adams. The Federalists handed over the reins of power to their hated rivals, setting a precedent that has guided U.S. politics ever since. This precedent established the tradition of the peaceful transfer of power. The bicentennial of…
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit; it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
... in coffee, but watch out for it in energy drinks, soft drinks, iced teas, and over-the-counter medications. First-Year Fitness Staying fit is easier than ever ... out. That means effort, energy, and exercise to keep you powered up during ...
Implementing change in health professions education: stakeholder analysis and coalition building.
Baum, Karyn D; Resnik, Cheryl D; Wu, Jennifer J; Roey, Steven C
2007-01-01
The challenges facing the health sciences education fields are more evident than ever. Professional health sciences educators have more demands on their time, more knowledge to manage, and ever-dwindling sources of financial support. Change is often necessary to either keep programs viable or meet the changing needs of health education. This article outlines a simple but powerful three-step tool to help educators become successful agents of change. Through the application of principles well known and widely used in business management, readers will understand the concepts behind stakeholder analysis and coalition building. These concepts are part of a powerful tool kit that educators need in order to become effective agents of change in the health sciences environment. Using the example of curriculum change at a school of veterinary medicine, we will outline the three steps involved, from stakeholder identification and analysis to building and managing coalitions for change.
MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing
2017-01-18
analysis, designing sensors, and developing algorithms. In 2008, Lincoln Laboratory demonstrated the largest single problem ever run on a computer using ... computation. As we design and prototype these devices, the use of leading-edge engineering practices has become the de facto standard. This includes ... MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing, by Dr. Jeremy Kepner. The introduction of multicore and manycore processors
A Computer Vision System for Locating and Identifying Internal Log Defects Using CT Imagery
Dongping Zhu; Richard W. Conners; Frederick Lamb; Philip A. Araman
1991-01-01
A number of researchers have shown the ability of magnetic resonance imaging (MRI) and computer tomography (CT) imaging to detect internal defects in logs. However, if these devices are ever to play a role in the forest products industry, automatic methods for analyzing data from these devices must be developed. This paper reports research aimed at developing a...
Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change
NASA Astrophysics Data System (ADS)
Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.
2017-12-01
Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our Greenhouse Gas emissions shift the overall balance between absorbed and emitted radiation, causing Global Warming. How much of these emissions is stored in the ocean vs. entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive Climate Change and Climate Variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" vs. mitigating emissions to avert catastrophe. Simulations of Climate Change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, an urban minority-serving institution in central Brooklyn. Supported by NSF Award AGS-1359293 and NASA Award NNX17AC81G.
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]
2011-08-16
An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
Adaptive Wavelet Modeling of Geophysical Data
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.
2009-12-01
Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency of the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
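To make the idea of adaptivity concrete, here is a minimal one-dimensional sketch (not the authors' 3-D geoelectric solver) using PyWavelets, assuming the pywt package is available: a toy model with one sharp interface is decomposed, small wavelet coefficients are discarded, and the retained degrees of freedom and reconstruction error are reported.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

# Toy 1-D "conductivity model": smooth background plus one sharp interface.
x = np.linspace(0.0, 1.0, 1024)
model = np.tanh(40.0 * (x - 0.6)) + 0.1 * np.sin(2 * np.pi * x)

# Multilevel wavelet decomposition.
coeffs = pywt.wavedec(model, "db4", level=6)

# Keep only coefficients above a threshold: an adaptive, quasi-minimal
# set of degrees of freedom concentrated near the sharp feature.
threshold = 1e-3
kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
n_total = sum(c.size for c in coeffs)
n_kept = sum(int(np.count_nonzero(c)) for c in kept)

reconstruction = pywt.waverec(kept, "db4")[: model.size]
err = np.max(np.abs(reconstruction - model))
print(f"kept {n_kept}/{n_total} coefficients, max reconstruction error {err:.2e}")
```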
Students' Perceptions of and Experiences With Educational Technology: A Survey.
Royal, Kenneth; Hedgpeth, Mari-Wells; McWhorter, Dan
2016-05-18
It is generally assumed that incoming students in medical education programs will be better equipped for the "digital age" given their younger age and an educational upbringing in which technology was seemingly omnipresent. In particular, many assume that today's medical students are more likely to hold positive attitudes and increased comfortability with technology and possess greater information technology (IT) skills. The purpose of this study was to compare responses of incoming veterinary medical students to a series of IT-related questions contained in a common questionnaire over the course of a 10-year period (2005-2015) to discern whether students' attitudes have improved and uses and comfortability with technology have increased as anticipated. A survey measuring attitudes and preferences, computing experience, and technology ownership was administered each year for the past 10 years to incoming veterinary medical students at a large veterinary school in the United States. Students' responses to survey items were compared at 3 data points (2005, 2010, and 2015). Today's incoming veterinary medical students tend to indicate the same desire to improve skills using spreadsheets and web page design as incoming students from 10 years ago. It seems that despite technological advances and increased exposure to such applications and skills, there remains a challenge for students to "keep up" with the ever evolving technology. Moreover, although students continue to report they are very comfortable with using a computer (and related devices), many use their computers as typewriters or word processors, as opposed to a means for performing more advanced computing functions. In general, today's medical students are not expert computer users as many assume. Despite an upbringing in a digitized world, many students still lack many basic computing skills.
Students' Perceptions of and Experiences With Educational Technology: A Survey
Hedgpeth, Mari-Wells; McWhorter, Dan
2016-01-01
Background It is generally assumed that incoming students in medical education programs will be better equipped for the “digital age” given their younger age and an educational upbringing in which technology was seemingly omnipresent. In particular, many assume that today's medical students are more likely to hold positive attitudes and increased comfortability with technology and possess greater information technology (IT) skills. Objective The purpose of this study was to compare responses of incoming veterinary medical students to a series of IT-related questions contained in a common questionnaire over the course of a 10-year period (2005-2015) to discern whether students’ attitudes have improved and uses and comfortability with technology have increased as anticipated. Methods A survey measuring attitudes and preferences, computing experience, and technology ownership was administered each year for the past 10 years to incoming veterinary medical students at a large veterinary school in the United States. Students' responses to survey items were compared at 3 data points (2005, 2010, and 2015). Results Today's incoming veterinary medical students tend to indicate the same desire to improve skills using spreadsheets and web page design as incoming students from 10 years ago. It seems that despite technological advances and increased exposure to such applications and skills, there remains a challenge for students to “keep up” with the ever evolving technology. Moreover, although students continue to report they are very comfortable with using a computer (and related devices), many use their computers as typewriters or word processors, as opposed to a means for performing more advanced computing functions. Conclusions In general, today's medical students are not expert computer users as many assume. Despite an upbringing in a digitized world, many students still lack many basic computing skills. PMID:27731853
COMPUTER TECHNOLOGY AND SOCIAL CHANGE,
This paper presents a discussion of the social, political, economic and psychological problems associated with the rapid growth and development of ... public officials and responsible groups is required to increase public understanding of the computer as a powerful tool, to select appropriate
Computers in imaging and health care: now and in the future.
Arenson, R L; Andriole, K P; Avrin, D E; Gould, R G
2000-11-01
Early picture archiving and communication systems (PACS) were characterized by the use of very expensive hardware devices, cumbersome display stations, duplication of database content, lack of interfaces to other clinical information systems, and immaturity in their understanding of the folder manager concepts and workflow reengineering. They were implemented historically at large academic medical centers by biomedical engineers and imaging informaticists. PACS were nonstandard, home-grown projects with mixed clinical acceptance. However, they clearly showed the great potential for PACS and filmless medical imaging. Filmless radiology is a reality today. The advent of efficient softcopy display of images provides a means for dealing with the ever-increasing number of studies and number of images per study. Computer power has increased, and archival storage cost has decreased to the extent that the economics of PACS is justifiable with respect to film. Network bandwidths have increased to allow large studies of many megabytes to arrive at display stations within seconds of examination completion. PACS vendors have recognized the need for efficient workflow and have built systems with intelligence in the management of patient data. Close integration with the hospital information system (HIS)-radiology information system (RIS) is critical for system functionality. Successful implementation of PACS requires integration or interoperation with hospital and radiology information systems. Besides the economic advantages, secure rapid access to all clinical information on patients, including imaging studies, anytime and anywhere, enhances the quality of patient care, although it is difficult to quantify. Medical image management systems are maturing, providing access outside of the radiology department to images and clinical information throughout the hospital or the enterprise via the Internet. Small and medium-sized community hospitals, private practices, and outpatient centers in rural areas will begin realizing the benefits of PACS already realized by the large tertiary care academic medical centers and research institutions. Hand-held devices and the Worldwide Web are going to change the way people communicate and do business. The impact on health care will be huge, including radiology. Computer-aided diagnosis, decision support tools, virtual imaging, and guidance systems will transform our practice as value-added applications utilizing the technologies pushed by PACS development efforts. Outcomes data and the electronic medical record (EMR) will drive our interactions with referring physicians and we expect the radiologist to become the informaticist, a new version of the medical management consultant.
Parallel Calculations in LS-DYNA
NASA Astrophysics Data System (ADS)
Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey
2017-11-01
Nowadays, structural mechanics exhibits a trend towards numerical solutions being found for increasingly extensive and detailed tasks, which requires that the capacities of computing systems be enhanced. Such enhancement can be achieved by different means. For example, if a computing system is a workstation, its components can be replaced and/or extended (CPU, memory, etc.). In essence, such modification eventually entails replacement of the entire workstation, i.e., replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput, etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, tools originally designed to render high-performance graphics can be applied to solving problems not immediately related to graphics (CUDA, OpenCL, Shaders, etc.). However, not all software suites utilize video cards' capacities. Another way to increase the capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is extensive growth, whereby a quite powerful system can be obtained by combining not particularly powerful nodes. Moreover, separate nodes may possess different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with LS-DYNA software. To establish a range of dependencies, a mere two-node cluster has proven sufficient.
Discovery of Nine Gamma-Ray Pulsars in Fermi-Lat Data Using a New Blind Search Method
NASA Technical Reports Server (NTRS)
Celik-Tinmaz, Ozlem; Ferrara, E. C.; Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.;
2011-01-01
We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient, and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 x 10^35 ergs per second and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| greater than 10 degrees). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 x 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 x 10^33 erg per second) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.
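The abstract highlights photon probability weights as one ingredient of the frequency search. Purely as a rough, self-contained illustration of that ingredient (not the LAT team's hierarchical pipeline, and with invented toy data), the sketch below computes a photon-weighted Fourier power over trial spin frequencies and should recover the frequency of a simulated 9 Hz pulsation.

```python
import numpy as np


def weighted_power(times, weights, freqs):
    """Photon-probability-weighted Fourier power at each trial frequency."""
    phases = 2.0 * np.pi * np.outer(freqs, times)       # (n_freq, n_photon)
    s = (weights * np.exp(1j * phases)).sum(axis=1)     # weighted Fourier sum
    return np.abs(s) ** 2 / np.sum(weights ** 2)


# Toy data: photons from a 9 Hz pulsar (clustered in pulse phase, high weights)
# mixed with unpulsed background photons carrying low weights.
rng = np.random.default_rng(1)
t_obs, f_true = 200.0, 9.0
cycles = rng.integers(0, int(t_obs * f_true), 300)
t_pulsar = (cycles + rng.normal(0.0, 0.03, 300) % 1.0) / f_true
t_bkg = rng.uniform(0.0, t_obs, 1700)
times = np.concatenate([t_pulsar, t_bkg])
weights = np.concatenate([np.full(300, 0.8), np.full(1700, 0.05)])

freqs = np.linspace(3.0, 12.0, 1801)                    # 5 mHz grid spacing
power = weighted_power(times, weights, freqs)
print("strongest trial frequency:", freqs[np.argmax(power)])
```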
Pletsch, H. J.; Guillemot, L.; Allen, B.; ...
2011-12-20
We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient, and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10^35 erg s^-1 and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| > 10°). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10^33 erg s^-1) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pletsch, H. J.; Allen, B.; Aulbert, C.
2012-01-10
We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative, and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10^35 erg s^-1 and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| > 10°). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10^33 erg s^-1) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.
Developing screening services for colorectal cancer on Android smartphones.
Wu, Hui-Ching; Chang, Chiao-Jung; Lin, Chun-Che; Tsai, Ming-Chang; Chang, Che-Chia; Tseng, Ming-Hseng
2014-08-01
Colorectal cancer (CRC) is an important health problem in Western countries and also in Asia. It is the third leading cause of cancer deaths in both men and women in Taiwan. According to the well-known adenoma-to-carcinoma sequence, the majority of CRC develops from colorectal adenomatous polyps. This concept provides the rationale for screening and prevention of CRC. Removal of colorectal adenoma could reduce the mortality and incidence of CRC. Mobile phones are now playing an ever more crucial role in people's daily lives. The latest generation of smartphones is increasingly viewed as hand-held computers rather than as phones, because of their powerful on-board computing capability, capacious memories, large screens, and open operating systems that encourage development of applications (apps). If we can detect the potential CRC patients early and offer them appropriate treatments and services, this would not only promote the quality of life, but also reduce the possible serious complications and medical costs. In this study, an intelligent CRC screening app on Android™ (Google™, Mountain View, CA) smartphones has been developed based on a data mining approach using decision tree algorithms. For comparison, the stepwise backward multivariate logistic regression model and the fecal occult blood test were also used. Compared with the stepwise backward multivariate logistic regression model and the fecal occult blood test, the proposed app system not only provides an easy and efficient way to quickly detect high-risk groups of potential CRC patients, but also brings more information about CRC to customer-oriented services. We developed and implemented an app system on Android platforms for ubiquitous healthcare services for CRC screening. It can assist people in achieving early screening, diagnosis, and treatment purposes, prevent the occurrence of complications, and thus reach the goal of preventive medicine.
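The abstract describes a screening app built from decision-tree data mining and benchmarked against logistic regression. A minimal, hypothetical sketch of that kind of classifier is shown below using scikit-learn; the feature names, synthetic data and thresholds are invented for illustration and are not the authors' screening model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical questionnaire features (age, sex, family history, smoking,
# BMI, fecal occult blood result); labels mark simulated high-risk outcomes.
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.integers(30, 80, n),          # age
    rng.integers(0, 2, n),            # sex
    rng.integers(0, 2, n),            # family history of CRC
    rng.integers(0, 2, n),            # smoker
    rng.normal(24, 4, n),             # BMI
    rng.integers(0, 2, n),            # FOBT positive
])
risk = 0.03 * (X[:, 0] - 30) + 1.5 * X[:, 2] + 2.0 * X[:, 5]
y = (risk + rng.normal(0, 1, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0)
tree.fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))
```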
Petrovici, Mihai A.; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks. PMID:25303102
Petrovici, Mihai A; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
NASA Astrophysics Data System (ADS)
Kortagere, Sandhya; Welsh, William J.
2006-12-01
G-protein coupled receptors (GPCRs) comprise a large superfamily of proteins that are targets for nearly 50% of drugs in clinical use today. In the past, the use of structure-based drug design strategies to develop better drug candidates has been severely hampered by the absence of the receptor's three-dimensional structure. However, with recent advances in molecular modeling techniques and better computing power, atomic-level details of these receptors can be derived from computationally derived molecular models. Using information from these models coupled with experimental evidence, it has become feasible to build receptor pharmacophores. In this study, we demonstrate the use of the Hybrid Structure Based (HSB) method, which can be used effectively to screen and identify prospective ligands that bind to GPCRs. Essentially, this multi-step method combines ligand-based methods for building enriched libraries of small molecules and structure-based methods for screening molecules against the GPCR target. The HSB method was validated by identifying retinal and its analogues from a random dataset of ~300,000 molecules. The results from this study showed that the 9 top-ranking molecules are indeed analogues of retinal. The method was also tested to identify analogues of dopamine binding to the dopamine D2 receptor. Six of the ten top-ranking molecules are known analogues of dopamine, including a prodrug, while the other thirty-four molecules are currently being tested for their activity against all dopamine receptors. The results from both these test cases have proved that the HSB method provides a realistic solution to bridge the gap between the ever-increasing demand for new drugs to treat psychiatric disorders and the lack of efficient screening methods for GPCRs.
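As a small illustration of the ligand-based half of such a pipeline (building an enriched library by similarity to a known ligand), the sketch below ranks a few candidate molecules by Tanimoto similarity to dopamine using RDKit Morgan fingerprints. The fingerprint choice, SMILES strings and any cut-off are assumptions for illustration and are not details taken from the HSB study.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

# Reference ligand (dopamine) and a few candidate SMILES (illustrative only).
dopamine = Chem.MolFromSmiles("NCCc1ccc(O)c(O)c1")
candidates = {
    "tyramine":  "NCCc1ccc(O)cc1",
    "serotonin": "NCCc1c[nH]c2ccc(O)cc12",
    "caffeine":  "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(dopamine, 2, nBits=2048)

# Rank candidates by Tanimoto similarity; a ligand-based enrichment step
# would keep only those above some cut-off before structure-based screening.
for name, smi in candidates.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2,
                                               nBits=2048)
    print(name, round(DataStructs.TanimotoSimilarity(ref_fp, fp), 3))
```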
Issues in undergraduate education in computational science and high performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchioro, T.L. II; Martin, D.
1994-12-31
The ever increasing need for mathematical and computational literacy within their society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem-oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?
GSM base station electromagnetic radiation and oxidative stress in rats.
Yurekli, Ali Ihsan; Ozkan, Mehmed; Kalkan, Tunaya; Saybasili, Hale; Tuncel, Handan; Atukeren, Pinar; Gumustas, Koray; Seker, Selim
2006-01-01
The ever increasing use of cellular phones and the increasing number of associated base stations are becoming a widespread source of nonionizing electromagnetic radiation. Some biological effects are likely to occur even at low-level EM fields. In this study, a gigahertz transverse electromagnetic (GTEM) cell was used as an exposure environment for plane wave conditions of far-field free space EM field propagation at the GSM base transceiver station (BTS) frequency of 945 MHz, and effects on oxidative stress in rats were investigated. When EM fields at a power density of 3.67 W/m2 (specific absorption rate = 11.3 mW/kg), which is well below current exposure limits, were applied, MDA (malondialdehyde) level was found to increase and GSH (reduced glutathione) concentration was found to decrease significantly (p < 0.0001). Additionally, there was a less significant (p = 0.0190) increase in SOD (superoxide dismutase) activity under EM exposure.
Recent advancements towards green optical networks
NASA Astrophysics Data System (ADS)
Davidson, Alan; Glesk, Ivan; Buis, Adrianus; Wang, Junjia; Chen, Lawrence
2014-12-01
Recent years have seen a rapid growth in demand for ultra high speed data transmission with end users expecting fast, high bandwidth network access. With this rapid growth in demand, data centres are under pressure to provide ever increasing data rates through their networks and at the same time improve the quality of data handling in terms of reduced latency, increased scalability and improved channel speed for users. However as data rates increase, present technology based on well-established CMOS technology is becoming increasingly difficult to scale and consequently data networks are struggling to satisfy current network demand. In this paper the interrelated issues of electronic scalability, power consumption, limited copper interconnect bandwidth and the limited speed of CMOS electronics will be explored alongside the tremendous bandwidth potential of optical fibre based photonic networks. Some applications of photonics to help alleviate the speed and latency in data networks will be discussed.
High-power laser with Nd:YAG single-crystal fiber grown by the micro-pulling-down technique
NASA Astrophysics Data System (ADS)
Didierjean, Julien; Castaing, Marc; Balembois, François; Georges, Patrick; Perrodin, Didier; Fourmigué, Jean Marie; Lebbou, Kherreddine; Brenier, Alain; Tillement, Olivier
2006-12-01
We present optical characterization and laser results achieved with single-crystal fibers directly grown by the micro-pulling-down technique. We investigate the spectroscopic and optical quality of the fiber, and we present the first laser results. We achieved a cw laser power of 10 W at 1064 nm for an incident pump power of 60 W at 808 nm and 360 kW peak power for 12 ns pulses at 1 kHz in the Q-switched regime. It is, to the best of our knowledge, the highest laser power ever achieved with directly grown single-crystal fibers.
Optically switched magnetism in photovoltaic perovskite CH3NH3(Mn:Pb)I3
Náfrádi, B.; Szirmai, P.; Spina, M.; Lee, H.; Yazyev, O. V.; Arakcheeva, A.; Chernyshov, D.; Gibert, M.; Forró, L.; Horváth, E.
2016-01-01
The demand for ever-increasing density of information storage and speed of manipulation boosts an intense search for new magnetic materials and novel ways of controlling the magnetic bit. Here, we report the synthesis of a ferromagnetic photovoltaic CH3NH3(Mn:Pb)I3 material in which the photo-excited electrons rapidly melt the local magnetic order through the Ruderman–Kittel–Kasuya–Yosida interactions without heating up the spin system. Our finding offers an alternative, very simple and efficient way of optical spin control, and opens an avenue for applications in low-power, light controlling magnetic devices. PMID:27882917
Absorbance Based Light Emitting Diode Optical Sensors and Sensing Devices
O'Toole, Martina; Diamond, Dermot
2008-01-01
The ever increasing demand for in situ monitoring of health, environment and security has created a need for reliable, miniaturised sensing devices. To achieve this, appropriate analytical devices are required that possess operating characteristics of reliability, low power consumption, low cost, autonomous operation capability and compatibility with wireless communications systems. The use of light emitting diodes (LEDs) as light sources is one strategy, which has been successfully applied in chemical sensing. This paper summarises the development and advancement of LED based chemical sensors and sensing devices in terms of their configuration and application, with the focus on transmittance and reflectance absorptiometric measurements. PMID:27879829
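For reference, transmittance-based LED absorptiometry of the kind surveyed here ultimately rests on the textbook Beer-Lambert relation (standard background, not a result of the paper):

```latex
% Beer-Lambert relation underlying transmittance absorptiometry (textbook form):
% I_0 incident intensity, I transmitted intensity, \varepsilon molar absorptivity,
% l optical path length, c analyte concentration.
\[
  T = \frac{I}{I_0}, \qquad A = -\log_{10} T = \varepsilon\, l\, c .
\]
```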
The liquid droplet radiator: Status of development
NASA Astrophysics Data System (ADS)
Persson, J.
1991-12-01
The ever greater amounts of power to be dissipated onboard future spacecraft, together with their limited external dimensions, will make it increasingly difficult to use conventional radiator technology without imposing a severe mass penalty. Hunting for lightweight alternatives to current heat rejection systems has become a matter of growing urgency, which explains the great interest that the Liquid Droplet Radiator (LDR) has attracted. Tradeoff analyses indicate that an LDR may be as much as an order of magnitude lighter than a comparable conventional radiator. A literature study examining the progress of the LDR research and some of its possible applications is reviewed. An investigation of the LDR heat rejection capability is presented.
Absorbance Based Light Emitting Diode Optical Sensors and Sensing Devices.
O'Toole, Martina; Diamond, Dermot
2008-04-07
The ever increasing demand for in situ monitoring of health, environment and security has created a need for reliable, miniaturised sensing devices. To achieve this, appropriate analytical devices are required that possess operating characteristics of reliability, low power consumption, low cost, autonomous operation capability and compatibility with wireless communications systems. The use of light emitting diodes (LEDs) as light sources is one strategy, which has been successfully applied in chemical sensing. This paper summarises the development and advancement of LED based chemical sensors and sensing devices in terms of their configuration and application, with the focus on transmittance and reflectance absorptiometric measurements.
Chen, Wen; Zhou, Fangjing; Hall, Brian J; Tucker, Joseph D; Latkin, Carl; Renzaho, Andre M N; Ling, Li
2017-01-01
Achieving high coverage of HIV testing services is critical in many health systems, especially where HIV testing services remain centralized and inconvenient for many. As a result, planning the optimal spatial distribution of HIV testing sites is increasingly important. We aimed to assess the relationship between geographic distance and uptake of HIV testing services among the general population in Guangzhou, China. Utilizing spatial epidemiological methods and stratified household random sampling, we studied 666 adults aged 18-59. Computer-assisted interviews assessed self-reported HIV testing history. A spatial scan statistic assessed the clustering of participants who had ever been tested for HIV, and two-level logistic regression models assessed the association between uptake of HIV testing and the mean driving distance from the participant's residence to all HIV testing sites in the research sites. The percentage of participants who had ever been tested for HIV was 25.2% (168/666, 95%CI: 21.9%, 28.5%), and the majority (82.7%) of participants were tested for HIV at Centres for Disease Control and Prevention, public hospitals or STI clinics. None reported using self-testing. Spatial clustering analyses found a hotspot that included 48 participants who had ever been tested for HIV against 25.8 expected cases (Rate Ratio = 1.86, P = 0.002). Adjusted two-level logistic regression found an inverse relationship between geographic distance (kilometers) and ever being tested for HIV (aOR = 0.90, 95%CI: 0.84, 0.96). Married or cohabiting participants (aOR = 2.14, 95%CI: 1.09, 4.20) and those with greater social support (aOR = 1.04, 95%CI: 1.01, 1.07) were more likely to have been tested for HIV. Our findings underscore the importance of considering the geographical distribution of HIV testing sites to increase testing. In addition, expanding HIV testing coverage by introducing non-facility-based HIV testing services and self-testing might be useful for achieving the goal that 90% of people living with HIV know their HIV status by the year 2020.
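The analysis relates ever-testing to driving distance through logistic models reported as adjusted odds ratios. The sketch below shows, on invented data with assumed column names, how such odds ratios can be obtained with statsmodels; note that the published study used a two-level (clustered) model, which this single-level sketch deliberately omits.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data frame: ever_tested (0/1), mean driving distance (km),
# marital status and a social-support score. Column names are assumptions.
rng = np.random.default_rng(0)
n = 666
df = pd.DataFrame({
    "distance_km": rng.uniform(1, 30, n),
    "married": rng.integers(0, 2, n),
    "support": rng.normal(20, 5, n),
})
logit_p = -1.0 - 0.10 * df["distance_km"] + 0.7 * df["married"] + 0.04 * df["support"]
df["ever_tested"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Single-level logistic fit; the published analysis nested respondents within
# sampling clusters (a two-level model), which this sketch does not include.
res = smf.logit("ever_tested ~ distance_km + married + support", data=df).fit(disp=0)
print(np.exp(res.params))           # odds ratios, e.g. OR < 1 for distance_km
print(np.exp(res.conf_int()))       # 95% confidence intervals on the OR scale
```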
Artificial intelligence and robotics in high throughput post-genomics.
Laghaee, Aroosha; Malcolm, Chris; Hallam, John; Ghazal, Peter
2005-09-15
The shift of post-genomics towards a systems approach has offered an ever-increasing role for artificial intelligence (AI) and robotics. Many disciplines (e.g. engineering, robotics, computer science) bear on the problem of automating the different stages involved in post-genomic research with a view to developing quality assured high-dimensional data. We review some of the latest contributions of AI and robotics to this end and note the limitations arising from the current independent, exploratory way in which specific solutions are being presented for specific problems without regard to how these could be eventually integrated into one comprehensible integrated intelligent system.
Storage media for computers in radiology
Dandu, Ravi Varma
2008-01-01
The introduction and wide acceptance of digital technology in medical imaging has resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media that they needed, based not just on the volume of data but also on considerations such as the speed and ease of access, redundancy, security, costs, as well as the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits. PMID:19774182
Bioinformatics in translational drug discovery.
Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G
2017-08-31
Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).
Cloud-Based Applications for Organizing and Reviewing Plastic Surgery Content
Luan, Anna; Momeni, Arash; Lee, Gordon K.
2015-01-01
Cloud-based applications including Box, Dropbox, Google Drive, Evernote, Notability, and Zotero are available for smartphones, tablets, and laptops and have revolutionized the manner in which medical students and surgeons read and utilize plastic surgery literature. Here we provide an overview of the use of Cloud computing in practice and propose an algorithm for organizing the vast amount of plastic surgery literature. Given the incredible amount of data being produced in plastic surgery and other surgical subspecialties, it is prudent for plastic surgeons to lead the process of providing solutions for the efficient organization and effective integration of the ever-increasing data into clinical practice. PMID:26576208
NASA Astrophysics Data System (ADS)
Van Damme, T.
2015-04-01
Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the 'traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.
ADDRESSING ENVIRONMENTAL ENGINEERING CHALLENGES WITH COMPUTATIONAL FLUID DYNAMICS
In the field of environmental engineering, modeling tools are playing an ever larger role in addressing air quality issues, including source pollutant emissions, atmospheric dispersion and human exposure risks. More detailed modeling of environmental flows requires tools for c...
[Computer-assisted multimedia interactive learning program "Primary Open-Angle Glaucoma"].
Dick, V B; Zenz, H; Eisenmann, D; Tekaat, C J; Wagner, R; Jacobi, K W
1996-05-01
Advances in the area of information technology have opened up new possibilities for the use of interactive media in the training of medical students. Classical instructional technologies, such as video, slides, audio cassettes and computer programs with a textbook orientation, have been merged into one multimedia computer system. The medical profession has been increasingly integrating computer-based applications which can be used, for example, for record keeping within a medical practice. The goal of this development is to provide access to all modes of information storage and retrieval as well as documentation and training systems within a specific context. Since the beginning of the winter semester 1995, the Department of Ophthalmology in Giessen has used the learning program "Primary Open Angle Glaucoma" in student instruction. One factor that contributed to the implementation of this project was that actual training using patients within the clinic is difficult to conduct. Media-supported training that can provide a simulation of actual practice offers a suitable substitute. The learning program has been installed on Power PCs (Apple Macintosh), which make up the technical foundation of our system. The program was developed using HyperCard software, which provides a user-friendly graphical working environment. This controls the input and retrieval of data, direct editing of documents, immediate simulation, the creation of on-screen documents and the integration of slides that have been scanned in as well as QuickTime films. All of this can be accomplished without any special knowledge of programming languages or operating systems on the part of the user. The glaucoma learning program is structured along the lines of anatomy, including an explanation of the circulation of the aqueous humor, pathology, clinical symptoms and findings, diagnosis and treatment. This structure, along with the possibility of creating a list of personal files for the user with a collection of illustrations and text, allows for quick access to learning content. The program is designed in such a way that working with and through it is done in a manner conducive to learning. Student response to the learning program as an accompaniment to instruction has been positive. Independent, supplemental student learning by means of an interactive learning program has raised the quality of study within the sciences. The use of a pedagogically sound multimedia program that is oriented toward problem solving and based on actual cases offers students the opportunity to actively work through ophthalmological material. An additional benefit is the development of competence in working with computer-supported information systems, something that is playing an ever-increasing role within the medical profession.
Utilization of optical sensors for phasor measurement units
Yao, Wenxuan; Wells, David; King, Daniel; ...
2017-11-10
With the help of GPS signals for synchronization, increasingly ubiquitous phasor measurement units (PMUs) provide power grid operators unprecedented system monitoring and control opportunities. However, the performance of PMUs is limited by the inherent deficiencies in traditional transformers. To address these issues, an optical sensor is used in the PMU for signal acquisition to replace the traditional transformers. This is the first time the utilization of an optical sensor in PMUs has ever been reported. The accuracy of frequency, angle, and amplitude is evaluated via experiments. The optical sensor based PMU can achieve an accuracy of 9.03 × 10^-4 Hz for frequency, 6.38 × 10^-3 rad for angle and 6.73 × 10^-2 V for amplitude with a real power grid signal, demonstrating the practicability of optical sensors in future PMUs.
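As background on what a PMU computes from the acquired waveform (regardless of whether the front end is an optical sensor or a transformer), here is a minimal, generic phasor-estimation sketch over one nominal cycle. The sampling rate, nominal frequency and test signal are assumptions, and this is not the authors' algorithm.

```python
import numpy as np


def estimate_phasor(samples, fs, f0=60.0):
    """Single-bin DFT phasor estimate over one nominal cycle.

    Returns (rms_amplitude, phase_rad) of the f0 component; a PMU would
    repeat this on a sliding window and timestamp each estimate via GPS.
    """
    n = samples.size
    t = np.arange(n) / fs
    # Correlate the samples with a complex exponential at the nominal frequency.
    x = (2.0 / n) * np.sum(samples * np.exp(-2j * np.pi * f0 * t))
    return np.abs(x) / np.sqrt(2.0), np.angle(x)


# Synthetic 60 Hz waveform, 230 V RMS, 0.5 rad phase, sampled at 3.84 kHz.
fs, f0 = 3840.0, 60.0
t = np.arange(int(fs / f0)) / fs                  # exactly one cycle of samples
v = 230.0 * np.sqrt(2.0) * np.cos(2 * np.pi * f0 * t + 0.5)
print(estimate_phasor(v, fs, f0))                 # approximately (230.0, 0.5)
```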
Monitoring of Overhead Transmission Lines: A Review from the Perspective of Contactless Technologies
NASA Astrophysics Data System (ADS)
Khawaja, Arsalan Habib; Huang, Qi; Khan, Zeashan Hameed
2017-12-01
This paper presents a comprehensive review of non-contact technologies for overhead power transmission lines. Due to the ever increasing emphasis on reducing accidents and speeding up diagnosis for automatically controlled grids, real-time remote sensing and actuation is the new horizon for smart grid implementation. A technology overview, with emphasis on the practical implementation of advanced non-contact technologies, is given while considering the optimization of high-voltage transmission line parameters. In the case of a fault, the voltage and the current exceed operating limits, and hence real-time reporting for control and diagnosis is a critical requirement. This paper aims to form a strong foundation for the control and diagnosis of future power distribution systems, so that a practitioner or researcher can make choices for a workable solution in smart grid implementation based on non-contact sensing.
Outstanding issues for new geothermal resource assessments
Williams, C.F.; Reed, M.J.
2005-01-01
A critical question for the future energy policy of the United States is the extent to which geothermal resources can contribute to an ever-increasing demand for electricity. Electric power production from geothermal sources exceeds that from wind and solar combined, yet the installed capacity falls far short of the geothermal resource base characterized in past assessments, even though the estimated size of the resource in six assessments completed in the past 35 years varies by thousands of Megawatts-electrical (MWe). The U. S. Geological Survey (USGS) is working closely with the Department of Energy's (DOE) Geothermal Research Program and other geothermal organizations on a three-year effort to produce an updated assessment of available geothermal resources. The new assessment will introduce significant changes in the models for geothermal energy recovery factors, estimates of reservoir permeability, limits to temperatures and depths for electric power production, and include the potential impact of evolving Enhanced (or Engineered) Geothermal Systems (EGS) technology.
Maxa, Jacob; Novikov, Andrej; Nowottnick, Mathias
2017-01-01
Modern high-power electronics devices consist of a large number of integrated circuits for switching and supply applications. Besides the benefits, the technology exhibits the problem of an ever increasing power density. Nowadays, heat sinks that are directly mounted on a device are used to reduce the on-chip temperature and dissipate the thermal energy to the environment. This paper presents a concept of a composite coating for electronic components on printed circuit boards or electronic assemblies that is able to buffer a certain amount of thermal energy dissipated from a device. The idea is to suppress temperature peaks in electronic components during load peaks or electrical shorts, which otherwise could damage or destroy the device, by using a phase change material to buffer the thermal energy. The phase change material coating could be applied directly on the chip package or the PCB using different mechanical retaining jigs.
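For a rough sense of scale of how much energy a thin phase change layer can buffer (illustrative property values typical of paraffin-like PCMs, not figures from the paper), the buffered energy is the sensible heat up to the melting point plus the latent heat of fusion, e.g. for 5 g of material with c_p of about 2 kJ/(kg K), a 30 K rise and L of about 200 kJ/kg:

```latex
% Thermal energy buffered by a PCM coating of mass m: sensible heat up to the
% melting point plus the latent heat of fusion L (illustrative values only).
\[
  Q = m\,c_p\,\Delta T + m\,L
    = 0.005\,\mathrm{kg}\cdot 2\,\tfrac{\mathrm{kJ}}{\mathrm{kg\,K}}\cdot 30\,\mathrm{K}
    + 0.005\,\mathrm{kg}\cdot 200\,\tfrac{\mathrm{kJ}}{\mathrm{kg}}
    = 0.3\,\mathrm{kJ} + 1.0\,\mathrm{kJ} = 1.3\,\mathrm{kJ}.
\]
```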
Developing a multimodal biometric authentication system using soft computing methods.
Malcangi, Mario
2015-01-01
Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision making.
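As a toy illustration of the final fusion-and-decision stage only (not the chapter's DSP/ANN/FLE implementation), the sketch below fuses two modality match scores with a weighted sum and applies a three-way, fuzzy-style decision rule; the weights and thresholds are invented.

```python
def fuse_scores(voice_score, finger_score, w_voice=0.4, w_finger=0.6,
                accept=0.7, reject=0.4):
    """Weighted score-level fusion with a three-way (fuzzy-style) decision.

    Scores are match scores in [0, 1] from the two modalities; the weights
    and thresholds here are illustrative, not taken from the chapter.
    """
    fused = w_voice * voice_score + w_finger * finger_score
    if fused >= accept:
        return fused, "accept"
    if fused <= reject:
        return fused, "reject"
    return fused, "request second attempt"   # ambiguous middle region


print(fuse_scores(0.82, 0.91))   # high agreement -> accept
print(fuse_scores(0.35, 0.55))   # weak evidence  -> ambiguous region
```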
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyna, David; Betty, Rita
Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact on the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities. The purpose of this project was to computationally model the impact of neural population dynamics within the neurobiological memory system in order to examine how subareas in the brain enable pattern separation and completion of information in memory across time as associated experiences.
Application of supercomputers to computational aerodynamics
NASA Technical Reports Server (NTRS)
Peterson, V. L.
1984-01-01
Computers are playing an increasingly important role in the field of aerodynamics such that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. Example results obtained from the successively refined forms of the governing equations are discussed, both in the context of levels of computer power required and the degree to which they either further the frontiers of research or apply to problems of practical importance. Finally, the Numerical Aerodynamic Simulation (NAS) Program - with its 1988 target of achieving a sustained computational rate of 1 billion floating point operations per second and operating with a memory of 240 million words - is discussed in terms of its goals and its projected effect on the future of computational aerodynamics.
REVIEW ARTICLE: The next 50 years of the SI: a review of the opportunities for the e-Science age
NASA Astrophysics Data System (ADS)
Foster, Marcus P.
2010-12-01
The International System of Units (SI) was declared as a practical and evolving system in 1960 and is now 50 years old. A large amount of theoretical and experimental work has been conducted to change the standards for the base units from artefacts to physical constants, to improve their stability and reproducibility. Less attention, however, has been paid to improving the SI definitions, utility and usability, which suffer from contradictions, ambiguities and inconsistencies. While humans can often resolve these issues contextually, computers cannot. As an ever-increasing volume and proportion of data about physical quantities is collected, exchanged, processed and rendered by computers, this paper argues that the SI definitions, symbols and syntax should be made more rigorous, so they can be represented wholly and unambiguously in ontologies, programs, data and text, and so the SI notation can be rendered faithfully in print and on screen.
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
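For readers who want a feel for the approach, a minimal shared-memory sketch of a parallel k-means assignment step follows. It is written in Python with a process pool and synthetic data, not the authors' Java/transactional-memory implementation.

```python
# Minimal sketch (assumption: Python + NumPy stand-ins, not the authors'
# Java/transactional-memory code) of a shared-memory parallel k-means:
# the expensive assignment step is split across worker processes, while
# the cheap centroid update stays serial.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _assign_chunk(args):
    chunk, centers = args
    # squared Euclidean distance of every row in the chunk to every center
    d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k, n_iter=20, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    chunks = np.array_split(X, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_iter):
            labels = np.concatenate(
                list(pool.map(_assign_chunk, [(c, centers) for c in chunks])))
            # centroid update; keep the old center if a cluster went empty
            centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
    return centers, labels

if __name__ == "__main__":
    X = np.random.rand(10000, 50)          # toy "expression matrix"
    centers, labels = parallel_kmeans(X, k=8)
    print(centers.shape, np.bincount(labels))
```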
[AERA. Dream machines and computing practices at the Mathematical Center].
Alberts, Gerard; De Beer, Huub T
2008-01-01
Dream machines may be just as effective as the ones materialised. Their symbolic thrust can be quite powerful. The Amsterdam 'Mathematisch Centrum' (Mathematical Center), founded February 11, 1946, created a Computing Department in an effort to realise its goal of serving society. When Aad van Wijngaarden was appointed as head of the Computing Department, however, he claimed space for scientific research and computer construction, next to computing as a service. Still, the computing service following the five stage style of Hartree's numerical analysis remained a dominant characteristic of the work of the Computing Department. The high level of ambition held by Aad van Wijngaarden led to ever renewed projections of big automatic computers, symbolised by the never-built AERA. Even a machine that was actually constructed, the ARRA which followed A.D. Booth's design of the ARC, never made it into real operation. It did serve Van Wijngaarden to bluff his way into the computer age by midsummer 1952. Not until January 1954 did the computing department have a working stored program computer, which for reasons of policy went under the same name: ARRA. After just one other machine, the ARMAC, had been produced, a separate company, Electrologica, was set up for the manufacture of computers, which produced the rather successful X1 computer. The combination of ambition and absence of a working machine led to a high level of work on programming, way beyond the usual ideas of libraries of subroutines. Edsger W. Dijkstra in particular led the way to an emphasis on the duties of the programmer within the pattern of numerical analysis. Programs generating programs, known elsewhere as autocoding systems, were at the 'Mathematisch Centrum' called 'superprograms'. Practical examples were usually called a 'complex', in Dutch, where in English one might say 'system'. Historically, this is where software begins. Dekker's matrix complex, Dijkstra's interrupt system, Dijkstra and Zonneveld's ALGOL compiler--which for housekeeping contained 'the complex'--were actual examples of such super programs. In 1960 this compiler gave the Mathematical Center a leading edge in the early development of software.
Correlation between disease severity and brain electric LORETA tomography in Alzheimer's disease.
Gianotti, Lorena R R; Künig, Gabriella; Lehmann, Dietrich; Faber, Pascal L; Pascual-Marqui, Roberto D; Kochi, Kieko; Schreiter-Gasser, Ursula
2007-01-01
To compare EEG power spectra and LORETA-computed intracortical activity between Alzheimer's disease (AD) patients and healthy controls, and to correlate the results with cognitive performance in the AD group. Nineteen channel resting EEG was recorded in 21 mild to moderate AD patients and in 23 controls. Power spectra and intracortical LORETA tomography were computed in seven frequency bands and compared between groups. In the AD patients, the EEG results were correlated with cognitive performance (Mini Mental State Examination, MMSE). AD patients showed increased power in EEG delta and theta frequency bands, and decreased power in alpha2, beta1, beta2 and beta3. LORETA specified that increases and decreases of power affected different cortical areas while largely sparing prefrontal cortex. Delta power correlated negatively and alpha1 power positively with the AD patients' MMSE scores; LORETA tomography localized these correlations in left temporo-parietal cortex. The non-invasive EEG method of LORETA localized pathological cortical activity in our mild to moderate AD patients in agreement with the literature, and yielded striking correlations between EEG delta and alpha1 activity and MMSE scores in left temporo-parietal cortex. The present data support the hypothesis of an asymmetrical progression of the Alzheimer's disease.
Benzekry, Sebastian; Tuszynski, Jack A; Rietman, Edward A; Lakka Klement, Giannoula
2015-05-28
The ever-increasing expanse of online bioinformatics data is enabling new ways not only to explore the visualization of these data, but also to apply novel mathematical methods to extract meaningful information for clinically relevant analysis of pathways and treatment decisions. One of the methods used for computing topological characteristics of a space at different spatial resolutions is persistent homology. This concept can also be applied to network theory, and more specifically to protein-protein interaction networks, where the number of rings in an individual cancer network represents a measure of complexity. We observed a linear correlation of R = -0.55 between persistent homology and 5-year survival of patients with a variety of cancers. This relationship was used to predict the proteins within a protein-protein interaction network with the most impact on cancer progression. By re-computing the persistent homology after computationally removing an individual node (protein) from the protein-protein interaction network, we were able to evaluate whether such an inhibition would lead to improvement in patient survival. The power of this approach lay in its ability to identify the effects of inhibition of multiple proteins and to expose whether the effect of a single inhibition may be amplified by inhibition of other proteins. More importantly, we illustrate specific examples of persistent homology calculations, which correctly predict the survival benefit observed in clinical trials using inhibitors of the identified molecular target. We propose that computational approaches such as persistent homology may be used in the future for the selection of molecular therapies in the clinic. The technique uses a mathematical algorithm to evaluate the node (protein) whose inhibition has the highest potential to reduce network complexity. The greater the drop in persistent homology, the greater the reduction in network complexity, and thus the larger the potential for survival benefit. We hope that the use of advanced mathematics in medicine will provide timely information about the best drug combination for patients, and avoid the expense associated with an unsuccessful clinical trial, where drug(s) did not show a survival benefit.
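A simplified illustration of the node-removal idea follows. It uses the first Betti number (independent cycle count) of a small toy network as a stand-in for the full persistent-homology computation described above; the edge list is invented.

```python
# Minimal sketch (assumption: Python + networkx, toy edge list) of ranking
# nodes by how much their removal reduces the number of independent cycles
# (first Betti number, b1 = E - V + C). This is only a simplified proxy for
# the persistent-homology complexity measure described in the abstract.
import networkx as nx

def betti_1(G):
    # independent cycles ("rings") in an undirected graph
    return (G.number_of_edges() - G.number_of_nodes()
            + nx.number_connected_components(G))

def complexity_drop_per_node(G):
    base = betti_1(G)
    drops = {}
    for node in list(G.nodes):
        H = G.copy()
        H.remove_node(node)                 # simulate inhibiting this protein
        drops[node] = base - betti_1(H)
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    toy_edges = [("A", "B"), ("B", "C"), ("C", "A"),   # ring 1
                 ("C", "D"), ("D", "E"), ("E", "C"),   # ring 2
                 ("E", "F")]                           # dangling node
    G = nx.Graph(toy_edges)
    for protein, drop in complexity_drop_per_node(G):
        print(protein, drop)
```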
ASEAN and Indochina: A Strategy for Regional Stability in the 1980’s.
1984-12-01
regional powers. The resultant regional balance of power is precarious, unstable, and ever threatens to deteriorate into armed conflict. The unequal...an armed force that is capable of large scale defense. Ironically, while ostensibly defensively motivated, these efforts have resulted in a war machine... resulted in the atmosphere of tense uncertainty in Southeast Asia today. In contrast to Vietnamese motivations for their force -- structure, the other
2015-09-03
The gas generator of an F-1 engine, the most powerful rocket engine ever built, is test-fired at NASA's Marshall Space Flight Center in Huntsville, Alabama, on Sept. 3. Although the engine was originally built to power the Saturn V rockets during America's missions to the Moon, this test article had new parts created using additive manufacturing, or 3-D printing, to test the viability of the technology for building new engine designs.
Burris, Jessica L; Riley, Elizabeth; Puleo, Gabriella E; Smith, Gregory T
2017-09-01
Among early adolescents in the United States (U.S.), the prevalence of cigarette smoking is at its lowest level in recent decades. Nonetheless, given the risks of smoking in early development, it remains critically important to study both risk factors for smoking and risks from smoking. This longitudinal study with U.S. early adolescents examines smoking initiation and tests a model of reciprocal prediction between ever smoking and the personality trait of urgency (i.e., mood-based impulsivity), a trait that increases risk for multiple forms of dysfunction. Participants (n=1906; 90% 10-11 years old, 50% female, 39% racial minorities at baseline) completed questionnaires 1-2 times per year starting in 5th grade and ending in 9th grade. Structural equation modeling allowed tests of bidirectional relationships between ever smoking and urgency controlling for pubertal status and negative affect at each wave. Incidence of ever smoking increased from 5% to 27% over time, with current smoking around 5% at the last wave. Urgency at each wave predicted ever smoking at the next wave above and beyond covariates and prior smoking (all p<0.01). Likewise, with one exception, ever smoking predicted an increase in urgency at the subsequent wave above and beyond covariates and prior urgency (all p<0.05). Results show that risk for smoking increases with higher levels of urgency and urgency increases secondary to engagement in smoking. Future work should therefore explore urgency as a point of prevention for smoking and smoking cessation as a means to mitigate mood-based impulsivity. Copyright © 2017 Elsevier B.V. All rights reserved.
Fracture and Failure at and Near Interfaces Under Pressure
1998-06-18
realistic data for comparison with improved analytical results, and to 2) initiate a new computational approach for stress analysis of cracks at and near...new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding...tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability or
Boyle, Cullen; Liang, Liang; Chen, Yun; ...
2017-06-06
Here, the present work demonstrates the feasibility of increasing the values of the Seebeck coefficient S and power factor of calcium cobaltite Ca3Co4O9 ceramics through competing dopant grain boundary segregation. The nominal chemistry of the polycrystalline material system investigated is Ca3-xBixBayCo4O9, with simultaneous stoichiometric substitution of Bi for Ca and non-stoichiometric addition of minute amounts of Ba. There is a continuous increase of S due to Bi substitution and Ba addition. The electrical resistivity also changes upon doping. Overall, the power factor of the best performing Bi and Ba co-doped sample is about 0.93 mW m⁻¹ K⁻², which is one of the highest power factor values ever reported for Ca3Co4O9, and corresponds to a factor of 3 increase compared to that of the baseline composition Ca3Co4O9. Systematic nanostructure and chemistry characterization was performed on the samples with different nominal compositions. When Bi is the only dopant in Ca3Co4O9, it can be found at both the grain interior and the grain boundaries (GBs) as a result of segregation. When Bi and Ba are added simultaneously as dopants, competing processes lead to the segregation of Ba and depletion of Bi at the GBs, with Bi present only in the grain interior. Bi substitution in the lattice increases the S at both the low and high temperature regimes, while the segregation of Ba at the GBs dramatically increases the S at the low temperature regime.
ISCR Annual Report: Fiscal Year 2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, J R
2005-03-03
Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that ''high performance computing is the backbone of the nation's science and technology enterprise''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and long-term visits with the aim of encouraging long-term academic research agendas that address LLNL's research priorities. Through such collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''feet and hands'' that carry those advances into the Laboratory and incorporates them into practice. ISCR research participants are integrated into LLNL's Computing and Applied Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other five institutes of the URP, it navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.
Recent developments in CO2 lasers
NASA Astrophysics Data System (ADS)
Du, Keming
1993-05-01
CO2 lasers have been used in industry mainly for such things as cutting, welding, and surface processing. To conduct a broad spectrum of high-speed and high-quality applications, most of the developments in industrial CO2 lasers at the ILT are aimed at increasing the output power, optimizing the beam quality, and reducing the production costs. Most of the commercial CO2 lasers above 5 kW are transverse-flow systems using dc excitation. The applications of these lasers are limited due to the lower beam quality, the poor point stability, and the lower modulation frequency. To overcome the problems we developed a fast axial-flow CO2 laser using rf excitation with an output of 13 kW. In section 2 some of the results are discussed concerning the gas flow, the discharge, the resonator design, optical effects of the active medium, the aerodynamic window, and the modulation of the output power. The first CO2 lasers ever built are diffusion-cooled systems with conventional dc excited cylindrical discharge tubes surrounded by cooling jackets. The output power per unit length is limited to 50 W/m by those lasers with cylindrical tubes. In the past few years considerable increases in the output power were achieved, using new mechanical geometries, excitation techniques, and resonator designs. This progress in diffusion-cooled CO2 lasers is presented in section 3.
Seplovich, Gabriela; Horvath, Keith J; Haughton, Lorlette J; Blackstock, Oni J
2017-03-31
For persons living with chronic medical conditions, the Internet can be a powerful tool for health promotion, and allow for immediate access to medical information and social support. However, women living with human immunodeficiency virus (HIV) in the United States face numerous barriers to computer and Internet use. Health behavior change models suggest that the first step towards adopting a new health behavior is to improve attitudes towards that behavior. To develop and pilot test Get+Connected, an intervention to improve computer and Internet attitudes and Internet use among women living with HIV. To develop Get+Connected, we reviewed the extant literature, adapted an existing curriculum, and conducted a focus group with HIV-positive women (n=20) at a community-based organization in the Bronx, New York. Get+Connected was comprised of five weekly sessions covering the following topics: basic computer knowledge and skills, identifying reliable health-related websites, setting up and using email and Facebook accounts, and a final review session. We recruited 12 women to participate in pilot testing. At baseline, we collected data about participants' sociodemographic information, clinical characteristics, and technology device ownership and use. At baseline, intervention completion, and three months postintervention, we collected data regarding attitudes towards computers and the Internet (Attitudes Towards Computers and the Internet Questionnaire [ATCIQ]; possible scores range from 5-50) as well as frequency of Internet use (composite measure). To examine changes in ATCIQ scores and Internet use over time, we used generalized estimating equations. We also collected qualitative data during intervention delivery. Among women in our sample, the median age was 56 years (interquartile range=52-63). All participants were black/African American and/or Latina. Seven participants (7/12, 58%) had a high school diploma (or equivalent) or higher degree. Ten participants (10/12, 83%) reported owning a mobile phone, while only one (1/12, 8%) reported owning a computer or tablet. Only one participant (1/12, 8%) reported having ever used the Internet or email. Internet nonusers cited lack of computer/Internet knowledge (6/11, 54%) and lack of access to a computer or similar device (4/11, 36%) as the main barriers to use. Over time, we observed an improvement in attitudes towards computers and the Internet (ATCIQ scores: 33.5 at baseline, 35 at intervention completion, and 36 at three months postintervention; P=.008). No significant increase in Internet use was observed (P=.61). Qualitative findings indicated excitement and enthusiasm for the intervention. In our sample of urban, technology-inexperienced HIV-positive women, participation in Get+Connected was associated with an improvement in attitudes towards computers and the Internet, but not Internet use. Changing attitudes is the first step in many health behavior change models, indicating that with improved access to computer and Internet resources, frequency of Internet use may also have increased. Future studies should consider addressing issues of access to technology in conjunction with Get+Connected. ©Gabriela Seplovich, Keith J Horvath, Lorlette J Haughton, Oni J Blackstock. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 31.03.2017.
Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, Naresh; Baone, Chaitanya; Veda, Santosh
2014-12-31
Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable estimation of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real-time will become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, the expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.
Biomolecular dynamics by computer analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.
1984-01-01
As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well developed study of the hydrogen bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested. 9 references, 6 figures.
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
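The headline figures above (roughly a 13% sample-size increase for ten million versus one million tests, and about 70% for ten tests versus one) can be reproduced with a standard two-sided z-test power calculation under Bonferroni correction. A minimal sketch of that calculation follows; it is not the authors' Excel calculator.

```python
# Minimal sketch (assumption: SciPy, two-sided z-test, Bonferroni correction)
# of the sample-size inflation needed to hold power constant as the number
# of hypothesis tests grows. This is not the authors' Excel calculator.
from scipy.stats import norm

def sample_size_ratio(m_tests, baseline_tests=1, alpha=0.05, power=0.80):
    z_beta = norm.ppf(power)
    z_m = norm.isf(alpha / (2 * m_tests))          # corrected per-test critical value
    z_base = norm.isf(alpha / (2 * baseline_tests))
    return ((z_m + z_beta) / (z_base + z_beta)) ** 2

if __name__ == "__main__":
    print(sample_size_ratio(10))                     # ~1.70: ten tests vs a single test
    print(sample_size_ratio(10_000_000, 1_000_000))  # ~1.13: 10^7 tests vs 10^6 tests
```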
Neyman, Markov processes and survival analysis.
Yang, Grace
2013-07-01
J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.
magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation
NASA Astrophysics Data System (ADS)
Angleraud, Christophe
2014-06-01
The ever-increasing amount of data and processing capabilities - following the well-known Moore's law - is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but they often become confusing as more layers are added. Moreover, the introduction of time as a fourth analysis dimension to allow analysis of time-dependent phenomena, such as meteorological or climate models, is encouraging real-time data exploration techniques that allow spatial-temporal points of interest to be detected by integration of moving images by the human brain. Magellium has been involved in high performance image processing chains for satellite image processing as well as scientific signal analysis and geographic information management since its creation in 2003. We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium scale clusters with expansion capabilities to large cloud-based clusters.
Interoperability Is Key to Smart Grid Success - Continuum Magazine | NREL
standards. Ever wonder what makes it possible to withdraw money securely from another bank's ATM, or call a communication allows access to money and phone calls nationwide, the Smart Grid-an automated electric power
NASA Astrophysics Data System (ADS)
Salvi, S.; Trasatti, E.; Rubbia, G.; Romaniello, V.; Spinetti, C.; Corradini, S.; Merucci, L.
2016-12-01
The EU's H2020 EVER-EST Project is dedicated to the realization of a Virtual Research Environment (VRE) for Earth Science researchers, during 2015-2018. EVER-EST implements state-of-the-art technologies in the area of Earth Science data catalogues, data access/processing and long-term data preservation together with models, techniques and tools for the computational methods, such as scientific workflows. The VRE is designed with the aim of providing the Earth Science user community with an innovative virtual environment to enhance their ability to interoperate and share knowledge and experience, exploiting also the Research Object concept. The GEO Geohazard Supersites is one of the four Research Communities chosen to validate the e-infrastructure. EVER-EST will help the exploitation of the full potential of the GEO Geohazard Supersite and Natural Laboratories (GSNL) initiative demonstrating the use case in the Permanent Supersites of Mt Etna, Campi Flegrei-Vesuvius, and Icelandic volcanoes. Besides providing tools for active volcanoes monitoring and studies, we intend to demonstrate how a more organized and collaborative research environment, such as a VRE, can improve the quality of the scientific research on the Geohazard Supersites, addressing at the same time the problem of the slow uptake of scientific research findings in Disaster Risk Management. Presently, the full exploitation of the in situ and satellite data made available for each Supersite is delayed by the difficult access (especially for researchers in developing countries) to intensive processing and modeling capabilities. EVER-EST is designed to provide these means and also a friendly virtual environment for the easy transfer of scientific knowledge as soon as it is acquired, promoting collaboration among researchers located in distant regions of the world. A further benefit will be to increase the societal impact of the scientific advancements obtained in the Supersites, allowing a more uniform interface towards the different user communities, who will use part of the services provided by EVER-EST during research result uptake. We show a few test cases of use of the Geohazard Supersite VRE at the actual state of development, and its future development.
Dust cyclone research in the 21st century
USDA-ARS?s Scientific Manuscript database
Research to meet the demand for ever more efficient dust cyclones continues after some eighty years. Recent trends emphasize design optimization through computational fluid dynamics (CFD) and testing design subtleties not modeled by semi-empirical equations. Improvements to current best available ...
Computational Methods for Stability and Control (COMSAC): The Time Has Come
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.
2005-01-01
Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end computational fluid dynamics (CFD) tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts will be summarized, as well as examples of current applications.
1992-03-16
34A Hidden U.S. Export: Higher Education ." The WashinQton Post, 16 February 1992, H1 and H4. Brandin , David H., and Michael A. Harrison. The...frequent significant technological change now occurs within the individual person’s working lifespan, life-long education is a necessity to remain...INDUSTRIAL REVOLUTION The phenomenal increase in speed and in raw power of computer processors, the shrinking size and cost of basic computing systems, the
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
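For reference, the standard Gauss and Laplace densities contrasted in the abstract are written below for a log-transformed variable; the paper's own parametrization may differ.

```latex
% Standard densities only; the paper's own parametrization may differ.
\[
  f_{\mathrm{Gauss}}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}
    \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right),
  \qquad
  f_{\mathrm{Laplace}}(x) = \frac{1}{2b}\,\exp\!\left(-\frac{|x-\mu|}{b}\right),
\]
where $x$ denotes the logarithm of income or wealth.
```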
Profiling an application for power consumption during execution on a compute node
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-09-17
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
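A minimal sketch of the idea of combining per-operation hardware power figures with an application's operation mix is given below; the operation categories and wattages are invented for illustration and are not taken from the patent.

```python
# Minimal sketch (assumption: invented operation categories and wattages,
# not values from the patent) of combining a hardware power-consumption
# profile with an application's operation mix to produce a per-application
# power profile.
HARDWARE_PROFILE_WATTS = {       # power draw during each operation type
    "compute": 95.0,
    "memory": 60.0,
    "network": 40.0,
    "idle": 25.0,
}

def application_power_profile(op_seconds):
    """op_seconds: seconds the application spends in each operation type."""
    per_op_joules = {op: HARDWARE_PROFILE_WATTS[op] * t
                     for op, t in op_seconds.items()}
    total_joules = sum(per_op_joules.values())
    total_seconds = sum(op_seconds.values())
    return per_op_joules, total_joules, total_joules / total_seconds

if __name__ == "__main__":
    profile, energy, avg_watts = application_power_profile(
        {"compute": 120, "memory": 40, "network": 10, "idle": 30})
    print(profile, energy, round(avg_watts, 1))    # reported power profile
```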
Improving UDP/IP Transmission Without Increasing Congestion
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2006-01-01
Datagram Retransmission (DGR) is a computer program that, within certain limits, ensures the reception of each datagram transmitted under the User Datagram Protocol/Internet Protocol. [User Datagram Protocol (UDP) is considered unreliable because it does not involve a reliability-ensuring connection-initiation dialogue between sender and receiver. UDP is well suited to issuing of many small messages to many different receivers.] Unlike prior software for ensuring reception of UDP datagrams, DGR does not contribute to network congestion by retransmitting data more frequently as an ever-increasing number of messages and acknowledgements is lost. Instead, DGR does just the opposite: DGR includes an adaptive timeout-interval-computing component that provides maximum opportunity for reception of acknowledgements, minimizing retransmission. By monitoring changes in the rate at which message-transmission transactions are completed, DGR detects changes in the level of congestion and responds by imposing varying degrees of delay on the transmission of new messages. In addition, DGR maximizes throughput by not waiting for acknowledgement of a message before sending the next message. All DGR communication is asynchronous, to maximize efficient utilization of network connections. DGR manages multiple concurrent datagram transmission and acknowledgement conversations.
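The adaptive timeout component can be illustrated with a smoothed round-trip-time estimator of the classic Jacobson/Karels type; the abstract does not specify DGR's actual formula, so the sketch below is only an illustration of the general technique.

```python
# Minimal sketch (assumption: classic Jacobson/Karels smoothing, since the
# abstract does not give DGR's actual formula) of an adaptive
# retransmission-timeout estimator driven by completed transactions.
class AdaptiveTimeout:
    def __init__(self, initial_rtt=0.5, alpha=0.125, beta=0.25, k=4):
        self.srtt = initial_rtt          # smoothed round-trip time (seconds)
        self.rttvar = initial_rtt / 2    # smoothed RTT variation
        self.alpha, self.beta, self.k = alpha, beta, k

    def on_ack(self, measured_rtt):
        # update the estimates from one completed message/acknowledgement exchange
        self.rttvar = ((1 - self.beta) * self.rttvar
                       + self.beta * abs(self.srtt - measured_rtt))
        self.srtt = (1 - self.alpha) * self.srtt + self.alpha * measured_rtt

    @property
    def timeout(self):
        # wait several deviations beyond the smoothed RTT before retransmitting
        return self.srtt + self.k * self.rttvar

if __name__ == "__main__":
    est = AdaptiveTimeout()
    for rtt in (0.4, 0.6, 1.2, 0.5):     # congestion briefly inflates the RTT
        est.on_ack(rtt)
        print(round(est.timeout, 3))
```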
Herweg, Andreas; Gutzeit, Julian; Kleih, Sonja; Kübler, Andrea
2016-12-01
Tactile event-related potentials (ERPs) are rarely used as an input signal to control brain-computer interfaces (BCIs) due to their low accuracy and speed (information transfer rate, ITR). Age-related loss of tactile sensibility might further decrease their viability for the target population of BCI. In this study we investigated whether training improves tactile ERP-BCI performance within a virtual wheelchair navigation task. Elderly subjects participated in 5 sessions and tactors were placed at the legs, abdomen and back. Mean accuracy and ITR increased from 88.43%/4.5 bits min⁻¹ in the 1st to 92.56%/4.98 bits min⁻¹ in the last session. The mean P300 amplitude increased from 5.46 μV to 9.22 μV. In an optional task participants achieved an accuracy of 95.56% and a mean ITR of 20.73 bits min⁻¹, which is the highest ever achieved with tactile stimulation. Our sample of elderly people further contributed to the external validity of our results. Copyright © 2016 Elsevier B.V. All rights reserved.
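For context, ITR figures of this kind are typically computed with the Wolpaw formula; since the abstract does not state the exact definition, class count or selection timing used, the sketch below is illustrative only and will not reproduce the reported values exactly.

```python
# Minimal sketch (assumption: the standard Wolpaw ITR formula; the study's
# exact ITR definition, class count and selection timing are not given, so
# the example numbers are illustrative only).
from math import log2

def wolpaw_itr(n_classes, accuracy, selections_per_minute):
    p = accuracy
    if p >= 1.0:
        bits = log2(n_classes)
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (log2(n_classes) + p * log2(p)
                + (1 - p) * log2((1 - p) / (n_classes - 1)))
    return bits * selections_per_minute

if __name__ == "__main__":
    # e.g. a 4-class tactile BCI at 95.56% accuracy with 12 selections/min
    print(round(wolpaw_itr(4, 0.9556, 12), 2))
```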
Mapping suitability areas for concentrated solar power plants using remote sensing data
Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.
2015-05-14
The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses a digital elevation model derived from NASA's Shuttle Radar Topography Mission (SRTM) at a resolution of 3 arc seconds (approx. 90 m resolution) for estimating global solar radiation for the study area. Then, we develop a computational model built on a Geographic Information System (GIS) platform that divides the study area into a grid of cells and estimates a site suitability value for each cell by computing a list of metrics based on applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS datasets are used to compute eight metrics. The site suitability value for each cell is computed as an algebraic sum of all metrics for the cell, with the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value. Furthermore, we present results for concentrated solar power that drives a steam turbine and a parabolic mirror connected to a Stirling engine.
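A minimal sketch of the equal-weight, per-cell suitability sum described above follows, using synthetic metric layers and an exclusion mask in place of the paper's roughly 30 GIS inputs.

```python
# Minimal sketch (assumption: synthetic NumPy layers in place of the paper's
# ~30 GIS inputs) of the equal-weight, per-cell suitability sum, with an
# exclusion mask for cells that must not be developed.
import numpy as np

def suitability(metric_layers, exclusion_mask=None):
    # each layer: 2-D array of per-cell scores already scaled to [0, 1]
    score = np.sum(metric_layers, axis=0)                # algebraic sum, equal weights
    if exclusion_mask is not None:
        score = np.where(exclusion_mask, np.nan, score)  # e.g. federal lands
    return score

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    solar = rng.random((4, 4))        # normalized solar-radiation metric
    slope = 1 - rng.random((4, 4))    # flatter cells score higher
    pop = 1 - rng.random((4, 4))      # lower population density scores higher
    excluded = rng.random((4, 4)) > 0.9
    print(suitability([solar, slope, pop], excluded))
```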