DOE Office of Scientific and Technical Information (OSTI.GOV)
Donald D. Dudenhoeffer; Bruce P. Hallbert
Instrumentation, Controls, and Human-Machine Interface (ICHMI) technologies are essential to ensuring delivery and effective operation of optimized advanced Generation IV (Gen IV) nuclear energy systems. In 1996, the Watts Bar I nuclear power plant in Tennessee was the last U.S. nuclear power plant to go on line; it was, in fact, built on pre-1990 technology. Since that plant was designed, there have been major advances in the field of ICHMI systems. Computer technology employed in other industries has advanced dramatically, and computing systems are now replaced every few years as they become functionally obsolete. Functional obsolescence occurs when newer, more capable technology replaces or supersedes an existing technology, even though the existing technology may still be in working order. Although ICHMI architectures are composed of much the same technology, they have not been updated nearly as often in the nuclear power industry. For example, some newer personal digital assistants (PDAs) or handheld computers may, in fact, have more functionality than the 1996 computer control system at the Watts Bar I plant. This illustrates the need to transition and upgrade current nuclear power plant ICHMI technologies.
Teaching with Technology: The Classroom Manager. Cost-Conscious Computing.
ERIC Educational Resources Information Center
Smith, Rhea; And Others
1992-01-01
Teachers discuss how to make the most of technology in the classroom during a tight economy. Ideas include recycling computer printer ribbons, buying replacement batteries for computer power supply packs, upgrading via software, and soliciting donated computer equipment. (SM)
Techno-Human Mesh: The Growing Power of Information Technologies.
ERIC Educational Resources Information Center
West, Cynthia K.
This book examines the intersection of information technologies, power, people, and bodies. It explores how information technologies are on a path of creating efficiency, productivity, profitability, surveillance, and control, and looks at the ways in which human-machine interface technologies, such as wearable computers, biometric technologies,…
Power System Information Delivering System Based on Distributed Object
NASA Astrophysics Data System (ADS)
Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji
In recent years, there have been remarkable improvements in computer performance and in computer-network and distributed information processing technologies. Moreover, deregulation is starting and will be spreading in the electric power industry in Japan. Consequently, power suppliers are required to supply low-cost power with high-quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability to adapt to those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture, and describe the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using Internet and distributed object technology. This system is a new type of SCADA system that monitors failures of the power transmission and distribution systems in a way that integrates geographic information.
"Cloud" functions and templates of engineering calculations for nuclear power plants
NASA Astrophysics Data System (ADS)
Ochkov, V. F.; Orlov, K. A.; Ko, Chzho Ko
2014-10-01
The article deals with the important problem of setting up computer-aided design calculations of various circuit configurations and power equipment using the templates and standard computer programs available on the Internet. Information is given about the developed Internet-based technology for carrying out such calculations using the templates accessible in the Mathcad Prime software package. The technology is considered taking as an example the solution of two problems in the field of nuclear power engineering.
Applications of aerospace technology in the electric power industry
NASA Technical Reports Server (NTRS)
Johnson, F. D.; Heins, C. F.
1974-01-01
Existing applications of NASA contributions to disciplines such as combustion engineering, mechanical engineering, materials science, quality assurance and computer control are outlined to illustrate how space technology is used in the electric power industry. Corporate strategies to acquire relevant space technology are described.
ERIC Educational Resources Information Center
Ekufu, ThankGod K.
2012-01-01
Organizations are finding it difficult in today's economy to implement the vast information technology infrastructure required to effectively conduct their business operations. Despite the fact that some of these organizations are leveraging on the computational powers and the cost-saving benefits of computing on the Internet cloud, others…
The impact of the pervasive information age on healthcare organizations.
Landry, Brett J L; Mahesh, Sathi; Hartman, Sandra J
2005-01-01
New information technologies place data on integrated information systems, and provide access via pervasive computing technologies. Pervasive computing puts computing power in the hands of all employees, available wherever it is needed. Integrated systems offer seamless data and process integration over diverse information systems. In this paper we look at the impact of these technologies on healthcare organizations in the future.
Parallel, distributed and GPU computing technologies in single-particle electron microscopy
Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger
2009-01-01
Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined. PMID:19564686
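The multicore paradigm the abstract describes can be sketched in a few lines of Python. The kernel and function names below are illustrative stand-ins, not code from the paper's software: a real single-particle EM workload would run FFTs or cross-correlations rather than this toy sum of squares.

```python
from multiprocessing import Pool

def projection_kernel(chunk):
    # Illustrative stand-in for a per-chunk compute kernel; in practice
    # this would be image processing on particle projections.
    return sum(x * x for x in chunk)

def parallel_total(data, workers=4):
    # Split the dataset into chunks and farm them out to worker
    # processes, mirroring the multicore/parallel paradigm above.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(projection_kernel, chunks))

if __name__ == "__main__":
    data = list(range(10_000))
    assert parallel_total(data) == sum(x * x for x in data)
```

On a GPU the same map-reduce shape would be expressed as a CUDA or OpenCL kernel; the common thread across the paradigms is partitioning independent work units.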
The Increasing Effects of Computers on Education.
ERIC Educational Resources Information Center
Gannon, John F.
Predicting that the teaching-learning process in American higher education is about to change drastically because of continuing innovations in computer-assisted technology, this paper argues that this change will be driven by inexpensive but powerful computer technology, and that it will manifest itself by reducing the traditional timing of…
ERIC Educational Resources Information Center
Udoh, Emmanuel E.
2010-01-01
Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…
The Impact of Iranian Teachers Cultural Values on Computer Technology Acceptance
ERIC Educational Resources Information Center
Sadeghi, Karim; Saribagloo, Javad Amani; Aghdam, Samad Hanifepour; Mahmoudi, Hojjat
2014-01-01
This study was conducted with the aim of testing the technology acceptance model and the impact of Hofstede cultural values (masculinity/femininity, uncertainty avoidance, individualism/collectivism, and power distance) on computer technology acceptance among teachers at Urmia city (Iran) using the structural equation modeling approach. From among…
Health care, ethics, and information technologies.
Curtin, Leah
2002-06-01
This essay explores how ethics, computing, and health care intersect in medical informatics. It discusses the power technology places in the hands of health care professionals and the ethical problems they may encounter as a result of that power.
IBM Cloud Computing Powering a Smarter Planet
NASA Astrophysics Data System (ADS)
Zhu, Jinzy; Fang, Xing; Guo, Zhe; Niu, Meng Hua; Cao, Fan; Yue, Shuang; Liu, Qin Yu
With the increasing need for intelligent systems supporting the world's businesses, cloud computing has emerged as a dominant trend providing the dynamic infrastructure that makes such intelligence possible. This article introduces how to build a smarter planet with cloud computing technology. First, it explains why cloud is needed and traces the evolution of cloud technology. Second, it analyzes the value of cloud computing and how to apply cloud technology. Finally, it predicts the future of cloud in the smarter planet.
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. Power and cost requirements of traditional-architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power-efficient processors, used in consumer electronic devices such as mobile phones, portable music players, and cameras, can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built on a five-year time scale for US$75M with a power consumption of 3 MW. This would be cheaper, more power-efficient, and available sooner than with any other existing technology.
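The projection above can be sanity-checked with simple arithmetic. The sketch below uses only the figures quoted in the abstract (1000x real time, 3 MW); the derived quantities are back-of-envelope illustrations, not numbers from the paper.

```python
# Back-of-envelope check of the abstract's projection.
POWER_MW = 3.0           # projected system power draw (from abstract)
SPEEDUP = 1000.0         # simulated time vs. wall-clock time (from abstract)
HOURS_PER_YEAR = 8766.0  # average year, including leap days

def sim_years_per_day(speedup=SPEEDUP):
    # At 1000x real time, one wall-clock day covers ~2.7 simulated years.
    return speedup / 365.25

def mwh_per_sim_year(power_mw=POWER_MW, speedup=SPEEDUP):
    # Wall-clock hours needed per simulated year, times power draw.
    return power_mw * HOURS_PER_YEAR / speedup

print(round(sim_years_per_day(), 2))  # ~2.74 simulated years per day
print(round(mwh_per_sim_year(), 1))   # ~26.3 MWh per simulated year
```

At roughly 26 MWh of electricity per simulated year, a thousand-member, century-long ensemble remains affordable in energy terms, which is the point of the power-efficiency argument.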
Assessing the Decision Process towards Bring Your Own Device
ERIC Educational Resources Information Center
Koester, Richard F.
2017-01-01
Information technology continues to evolve to the point where mobile technologies--such as smart phones, tablets, and ultra-mobile computers have the embedded flexibility and power to be a ubiquitous platform to fulfill the entire user's computing needs. Mobile technology users view these platforms as adaptable enough to be the single solution for…
Live broadcast of laparoscopic surgery to handheld computers.
Gandsas, A; McIntire, K; Park, A
2004-06-01
Thanks to advances in computer power and miniaturization technology, portable electronic devices are now being used to assist physicians with applications that extend far beyond Web browsing or e-mail. Handheld computers are used for electronic medical records, billing, and coding, and to enable convenient access to electronic journals for reference purposes. The results of diagnostic investigations, such as laboratory results, study reports, and still radiographic pictures, can also be downloaded into portable devices for later viewing. Handheld computer technology, combined with wireless protocols and streaming video technology, has the added potential to become a powerful educational tool for medical students and residents. The purpose of this study was to assess the feasibility of transferring multimedia data in real time to a handheld computer via a wireless network and displaying them on the computer screens of clients at remote locations. A laparoscopic splenectomy was broadcast live to eight handheld computers simultaneously through our institution's wireless network. All eight viewers were able to view the procedure and to hear the surgeon's comments throughout the entire operation. Handheld computer technology can play a key role in surgical education by delivering information to surgical residents or students when they are geographically distant from the actual event. Validation of this new technology through clinical research is still needed to determine whether resident physicians or medical students can benefit from the use of handheld computers.
Saving Energy and Money: A Lesson in Computer Power Management
ERIC Educational Resources Information Center
Lazaros, Edward J.; Hua, David
2012-01-01
In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
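The kind of estimate the students are asked to make can be sketched as a short calculation. All of the numbers below (wattages, idle hours, electricity price, fleet size) are hypothetical classroom assumptions, not figures from the article.

```python
# Hedged, illustrative estimate of classroom power-management savings.
def annual_savings_usd(n_computers=30,
                       active_watts=120.0,    # assumed draw, display on
                       sleep_watts=5.0,       # assumed draw, display sleep
                       idle_hours_per_day=4,  # assumed idle time per school day
                       school_days=180,
                       usd_per_kwh=0.12):
    # Energy avoided by sleeping the display during idle hours,
    # converted from watt-hours to kilowatt-hours, then priced.
    saved_watts = active_watts - sleep_watts
    kwh_saved = (n_computers * saved_watts * idle_hours_per_day
                 * school_days) / 1000.0
    return kwh_saved * usd_per_kwh

print(round(annual_savings_usd(), 2))  # 298.08 under these assumptions
```

Students can vary any input to see how sensitive the savings are, e.g. halving idle hours halves the result.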
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously in recent years, with larger media capacity, more memory, and greater computational power. Visual computing, with high-performance graphic interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work, and safety-analysis studies. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages, and tools have appeared in the last several years.
2015-11-01
provided by a stand-alone desktop or handheld computing device. This introduces into the discussion a large number of mobile, tactical command, control, communications, and computer (C4) systems across the Services. A couple of examples are mobile command posts mounted on the back of an M1152... infrastructure (DCPI). This term encompasses on-site backup generators, switchgear, uninterruptible power supplies (UPS), and power distribution units
Challenges and Security in Cloud Computing
NASA Astrophysics Data System (ADS)
Chang, Hyokyung; Choi, Euiin
People want to solve problems as they arise. An IT technology called ubiquitous computing helps make such situations easier, and the technology that makes it even better and more powerful is called cloud computing. Cloud computing, however, is only at the beginning of implementation and use, and it faces many technical challenges and security issues. This paper looks at cloud computing security.
The Power of Computer-aided Tomography to Investigate Marine Benthic Communities
Computer-aided tomography (CT) technology is a powerful tool for investigating benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...
ERIC Educational Resources Information Center
Campbell, Heather M.
2010-01-01
Steam-powered machines, anachronistic technology, clockwork automatons, gas-filled airships, tentacled monsters, fob watches, and top hats--these are all elements of steampunk. Steampunk is both speculative fiction that imagines technology evolved from steam-powered cogs and gears--instead of from electricity and computers--and a movement that…
Distributed energy storage systems on the basis of electric-vehicle fleets
NASA Astrophysics Data System (ADS)
Zhuk, A. Z.; Buzoverov, E. A.; Sheindlin, A. E.
2015-01-01
Several power technologies directed at solving the problem of covering nonuniform loads in power systems have been developed at the Joint Institute for High Temperatures, Russian Academy of Sciences (JIHT RAS). One direction of investigation is the use of the storage batteries of electric vehicles to compensate load peaks in the power system (V2G, vehicle-to-grid technology). In the article, the efficiency of energy storage systems based on electric vehicles is compared with that of traditional energy-storage technologies by means of computations. The comparison is performed using the minimum-cost criterion for peak energy supply to the system. Computations show that distributed storage systems based on fleets of electric cars are economically efficient when used for up to 1 h/day. In contrast to traditional methods, the prime cost of load regulation in a power system based on V2G technology is independent of the duration of the load compensation period (the duration of the consumption peak).
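The structure of the cost comparison described above can be sketched as follows. All prices and parameters here are hypothetical placeholders, not values from the article; the sketch only illustrates why the V2G cost per kWh is flat in peak duration while a conventional peaker's is not.

```python
# Illustrative minimum-cost comparison for peak energy supply.
def v2g_cost_per_kwh(battery_wear_usd_per_kwh=0.08,
                     energy_usd_per_kwh=0.10):
    # V2G: cost accrues per kWh delivered (battery wear + charging
    # energy), so it does not depend on how long the daily peak lasts.
    return battery_wear_usd_per_kwh + energy_usd_per_kwh

def peaker_cost_per_kwh(peak_hours_per_day,
                        capex_usd_per_kw_year=60.0,
                        fuel_usd_per_kwh=0.09):
    # A conventional peaking plant amortizes its fixed capacity cost
    # over the hours it actually runs; short peaks make each kWh dear.
    hours_per_year = peak_hours_per_day * 365.0
    return capex_usd_per_kw_year / hours_per_year + fuel_usd_per_kwh

for h in (0.5, 1, 2, 4):
    print(h, round(v2g_cost_per_kwh(), 3), round(peaker_cost_per_kwh(h), 3))
```

Under these placeholder numbers, V2G wins for short peaks (around 1 h/day and below) and loses for long ones, which matches the shape of the article's conclusion.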
Physics and Robotic Sensing -- the good, the bad, and approaches to making it work
NASA Astrophysics Data System (ADS)
Huff, Brian
2011-03-01
All of the technological advances that have benefited consumer electronics have direct application to robotics. Technological advances have resulted in the dramatic reduction in size, cost, and weight of computing systems, while simultaneously doubling computational speed every eighteen months. The same manufacturing advancements that have enabled this rapid increase in computational power are now being leveraged to produce small, powerful and cost-effective sensing technologies applicable for use in mobile robotics applications. Despite the increase in computing and sensing resources available to today's robotic systems developers, there are sensing problems typically found in unstructured environments that continue to frustrate the widespread use of robotics and unmanned systems. This talk presents how physics has contributed to the creation of the technologies that are making modern robotics possible. The talk discusses theoretical approaches to robotic sensing that appear to suffer when they are deployed in the real world. Finally the author presents methods being used to make robotic sensing more robust.
Efficiency improvement of technological preparation of power equipment manufacturing
NASA Astrophysics Data System (ADS)
Milukov, I. A.; Rogalev, A. N.; Sokolov, V. P.; Shevchenko, I. V.
2017-11-01
The competitiveness of power equipment depends primarily on speeding up the development and mastering of new equipment samples and technologies and on enhancing the organisation and management of design, manufacturing, and operation. Current political, technological, and economic conditions create an acute need to change the strategy and tactics of process planning. At the same time, the issues of equipment maintenance, with simultaneous improvement of its efficiency and of its compatibility with domestically produced components, are considered. To solve these problems, using computer-aided process planning systems for process design at all stages of the power equipment life cycle is economically viable. Computer-aided process planning is developed to improve process planning by using mathematical methods and optimisation of design and management processes on the basis of CALS technologies, which allows for simultaneous process design, process planning organisation, and management based on mathematical and physical modelling of interrelated design objects and the production system. An integration of computer-aided systems providing the interaction of informative and material processes at all stages of the product life cycle is proposed as an effective solution to the challenges in new equipment design and process planning.
Eisler, Matthew N
Historians of science and technology have generally ignored the role of power sources in the development of consumer electronics. In this they have followed the predilections of historical actors. Research, development, and manufacturing of batteries have historically occurred at a social and intellectual distance from the research, development, and manufacturing of the devices they power. Nevertheless, power source technoscience should properly be understood as an allied yet estranged field of electronics. The separation between the fields has had important consequences for the design and manufacturing of mobile consumer electronics. This paper explores these dynamics in the co-construction of notebook batteries and computers. In so doing, it challenges assumptions of historians and industrial engineers and planners about the nature of computer systems in particular and the development of technological systems. The co-construction of notebook computers and batteries, and the occasional catastrophic failure of their compatibility, challenges systems thinking more generally.
An Overview of the AAVSO's Information Technology Infrastructure From 1967 to 1997
NASA Astrophysics Data System (ADS)
Kinne, R. C. S.
2012-06-01
Computer technology and data processing swept both society and the sciences like a wave in the latter half of the 20th century. We trace the AAVSO’s usage of computational and data processing technology from its beginnings in 1967, through 1997. We focus on equipment, people, and the purpose such computational power was put to, and compare and contrast the organization’s use of hardware and software with that of the wider industry.
ERIC Educational Resources Information Center
Gerjets, Peter H.; Hesse, Friedrich W.
2004-01-01
The goal of this chapter is to outline a theoretical and empirical perspective on how learners' conceptions of educational technology might influence their learning activities and thereby determine the power of computer-based learning environments. Starting with an introduction to the concept of powerful learning environments we outline how recent…
Computing technology in the 1980's. [computers
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.
Technological inductive power transfer systems
NASA Astrophysics Data System (ADS)
Madzharov, Nikolay D.; Nemkov, Valentin S.
2017-05-01
Inductive power transfer is a very fast-expanding technology with multiple design principles and practical implementations, ranging from charging phones and computers to bionic systems, car chargers, and continuous power transfer in technological lines. Only the group of devices working in the near magnetic field is considered. This article is devoted to an overview of different inductive power transfer (IPT) devices. A review of the literature in this area showed that industrial IPTs are not widely discussed or examined. The authors have experience in the design and implementation of several types of IPTs belonging to the wireless automotive charger and industrial application groups. Main attention in the article is paid to the principles and design of technological IPTs.
Powerful Presentations with PowerPoint.
ERIC Educational Resources Information Center
Schenone-Stevens, M. Carla
As educational institutions prepare to meet the challenges of the new millennium, it becomes more apparent that computer-competent students should be graduated to meet the needs of the advances in technology in the workplace. One technology that is readily available is presentation software, which allows the student to generate slides, overheads,…
NASA Astrophysics Data System (ADS)
Ando, K.; Fujita, S.; Ito, J.; Yuasa, S.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.; Yoda, H.
2014-05-01
Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally-off computers. Critical tasks to achieve normally-off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and the challenges that remain for normally-off computers are discussed.
2011-09-01
15V power supply for the IMU; switching 5, 12V ATX power supply for the computer and hard drive; an L1/L2 active antenna on a small back plane; USB to serial... Figure 4. UAS Target Location Technology for Ground Based Observers (TLGBO)
NASA Technical Reports Server (NTRS)
Myers, Roger M.
1991-01-01
In-house magnetoplasmadynamic (MPD) thruster technology is discussed. The study focused on steady-state thrusters at powers of less than 1 MW. Performance measurement and diagnostics technologies were developed for high-power thrusters, and an MPD computer code was also developed. The stated goals of the program are to establish performance and life limitations, the influence of applied fields, propellant effects, and scaling laws. The presentation is mostly through graphs and charts.
Cloud Computing and the Power to Choose
ERIC Educational Resources Information Center
Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo
2010-01-01
Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…
Computer-Aided Engineering Tools | Water Power | NREL
Computer-aided engineering tools for water power energy converters will provide a full range of simulation capabilities for single devices and arrays. Simulation of water power technologies on high-performance computers enables the study of complex systems and experimentation. Such simulation is critical to accelerating progress in energy programs within the U.S. Department of Energy.
Continuous-variable quantum computing on encrypted data.
Marshall, Kevin; Jacobsen, Christian S; Schäfermeier, Clemens; Gehring, Tobias; Weedbrook, Christian; Andersen, Ulrik L
2016-12-14
The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks.
Implications of Ubiquitous Computing for the Social Studies Curriculum
ERIC Educational Resources Information Center
van Hover, Stephanie D.; Berson, Michael J.; Bolick, Cheryl Mason; Swan, Kathleen Owings
2004-01-01
In March 2002, members of the National Technology Leadership Initiative (NTLI) met in Charlottesville, Virginia to discuss the potential effects of ubiquitous computing on the field of education. Ubiquitous computing, or "on-demand availability of task-necessary computing power," involves providing every student with a handheld computer--a…
The feasibility of mobile computing for on-site inspection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horak, Karl Emanuel; DeLand, Sharon Marie; Blair, Dianna Sue
With over 5 billion cellphones in a world of 7 billion inhabitants, mobile phones are the most quickly adopted consumer technology in the history of the world. Miniaturized, power-efficient sensors, especially video-capable cameras, are becoming extremely widespread, especially when one factors in wearable technology like Pebble watches, GoPro video systems, Google Glass, and lifeloggers. Tablet computers are becoming more common, lighter weight, and more power-efficient. In this report the authors explore recent developments in mobile computing and their potential application to on-site inspection for arms control verification and treaty compliance determination. We examine how such technology can effectively be applied to current and potential future inspection regimes. Use cases are given for both host-escort and inspection teams. The results of field trials and their implications for on-site inspections are discussed.
Boys and Machines: Gendered Computer Identities, Regulation and Resistance
ERIC Educational Resources Information Center
Abbiss, Jane
2011-01-01
Students negotiate their masculine and feminine identities as students of information and communication technology (ICT) and computer users as they participate in specialist ICT courses and in other areas of their lives. As they negotiate these roles, they are established in relations of power and authority with the technology and with each other.…
Linear and passive silicon diodes, isolators, and logic gates
NASA Astrophysics Data System (ADS)
Li, Zhi-Yuan
2013-12-01
Silicon photonic integrated devices and circuits offer a promising means to revolutionize information processing and computing technologies. One important reason is that these devices are compatible with the conventional complementary metal oxide semiconductor (CMOS) processing technology that dominates the current microelectronics industry. Yet the dream of building optical computers cannot come true without breakthroughs in several key elements, including optical diodes, isolators, and logic gates with low power, high signal contrast, and large bandwidth. Photonic crystals have great power to mold the flow of light on the micrometer/nanometer scale and are a promising platform for optical integration. In this paper we present our recent efforts in the design, fabrication, and characterization of ultracompact, linear, passive on-chip optical diodes, isolators, and logic gates based on silicon two-dimensional photonic crystal slabs. Both simulation and experimental results show high performance for these newly designed devices. These linear and passive silicon devices have the unique properties of small footprint, low power requirements, large bandwidth, fast response speed, ease of fabrication, and compatibility with CMOS technology. Further improving their performance would open a road toward photonic logic and optical computing and help to construct nanophotonic on-chip processor architectures for future optical computers.
1984-12-01
During the 1980's we are seeing enhancement of the breadth, power, and accessibility of computers in many dimensions: (1) powerful, costly, fragile mainframes... MEMORANDUM FOR THE CHAIRMAN, DEFENSE SCIENCE BOARD. SUBJECT: Defense Science Board Task Force on Supercomputer Applications. You are requested to...
Proceedings of the American Power Conference. Volume 58-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
McBride, A.E.
1996-11-01
This book is part 2 of the proceedings of the American Power Conference, Technology for Competition and Globalization, 1996. The topics of the papers include structural plant design; challenges of the global marketplace; thermal hydraulic methods for nuclear power plant safety and operation; decontamination and decommissioning; competitive operations and maintenance; fuel opportunities; cooling; competitive power pricing; operations; transformers; relays; plant controls; training to meet the competitive future; burning technologies; ash and byproducts utilization; advanced systems; computer tools for plant design; globalization of power; power system protection and power quality; life extension; grounding; and transmission line equipment.
An Overview of the Evolution of the AAVSO's Information Technology Infrastructure Between 1965-1997
NASA Astrophysics Data System (ADS)
Kinne, Richard C. S.; Saladyga, M.; Waagen, E. O.
2011-05-01
We trace the history and usage of computers and data processing equipment at AAVSO Headquarters from its beginnings in the 1960s to 1997. We focus on the equipment, the people, and the purposes to which such computational power was put. We examine how the AAVSO evolved its use of computing and data processing resources as the technology evolved, in order to further its mission.
Study of Reversible Logic Synthesis with Application in SOC: A Review
NASA Astrophysics Data System (ADS)
Sharma, Chinmay; Pahuja, Hitesh; Dadhwal, Mandeep; Singh, Balwinder
2017-08-01
The prime concern in today's SOC designs is power dissipation, which increases with technology scaling. Reversible logic possesses very high potential for reducing power dissipation in these designs. It finds application in the latest research fields such as DNA computing, quantum computing, ultra-low-power CMOS design, and nanotechnology. Reversible circuits can be easily designed using conventional CMOS technology at the cost of garbage outputs, which maintain reversibility. The purpose of this paper is to provide an overview of the developments in this concept to date and of how new reversible logic gates are used to design logic functions.
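As a concrete illustration of the idea (a generic textbook example, not one of the paper's gate designs): the Toffoli gate embeds AND in a reversible map, passing its first two inputs through as the "garbage" outputs that preserve reversibility.

```python
# Minimal sketch of a reversible gate: the Toffoli (CCNOT) gate maps
# (a, b, c) -> (a, b, c XOR (a AND b)). The first two outputs are the
# garbage lines that preserve reversibility; applying the gate twice
# recovers the original inputs, so no information is erased.

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# Reversibility check: the gate is its own inverse on every input.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)

# An AND gate embedded reversibly: set c = 0 and read AND(a, b) off the
# third wire, while a and b come out unchanged as garbage outputs.
assert toffoli(1, 1, 0)[2] == 1
assert toffoli(1, 0, 0)[2] == 0
```

Because the map is a bijection on its inputs, it avoids the information loss that, by Landauer's principle, must dissipate at least kT·ln 2 of heat per erased bit.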
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro
2017-08-01
In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of Things (IoT) applications. In energy-harvesting applications, because power supplies generated from renewable sources suffer frequent power failures, processed data need to be backed up when a power failure occurs. Unless data are safely backed up before the power supply diminishes, reinitialization is required when power is restored, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories enables a faster backup than in a conventional volatile computer system, leading to higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows reductions in energy of a few orders of magnitude in comparison with a volatile processor with SRAM.
S-MMICs: Sub-mm-Wave Transistors and Integrated Circuits
2008-09-01
Army Research Lab BAA DAAD19-03-R-0017, research area 2.35: RF devices (Dr. Alfred Hung). Submitted by: Mark Rodwell, Department of Electrical and Computer Engineering. Contents include: motivation/application; technology status; transistor scaling laws; 256 nm generation; HBT power amplifier development; dry-etched emitter technology; scaled epitaxy; conclusions. Executive Summary: Transistor and power amplifier IC technology was...
Emerging Trends in Technology Education Computer Applications.
ERIC Educational Resources Information Center
Hazari, Sunil I.
1993-01-01
Graphical User Interface (GUI)--and its variant, pen computing--is rapidly replacing older types of operating environments. Despite its heavier demand for processing power, GUI has many advantages. (SK)
TABLET: The personal computer of the year 2000
NASA Technical Reports Server (NTRS)
Mel, Bartlett W.; Omohundro, Stephen M.; Robison, Arch D.; Skiena, Steven S.; Thearling, Kurt H.; Young, Luke T.; Wolfram, Stephen
1988-01-01
The University of Illinois design of the TABLET portable computer extends the freedom of pen and notepad with a machine that draws on the projected power of 21st century technology. Without assuming any new, major technological breakthroughs, it seeks to balance the promises of today's growing technologies with the changing role of computers in tomorrow's education, research, security, and commerce. It seeks to gather together in one basket the matured fruits of such buzzword technologies as LCD, GPS, CCD, WSI, and DSP. The design is simple, yet sleek. Roughly the size and weight of a notebook, the machine is a dark, featureless monolith with no moving parts. Through magneto-optics, a simple LaserCard provides exchangeable, mass data storage. Its I/O surface, in concert with built-in infrared and cellular transceivers, puts the user in touch with anyone and anything. The ensemble of these components, directed by software that can transform it into anything from a keyboard or notepad to an office or video studio, suggests an instrument of tremendous power and freedom.
Design of nodes for embedded and ultra low-power wireless sensor networks
NASA Astrophysics Data System (ADS)
Xu, Jun; You, Bo; Cui, Juan; Ma, Jing; Li, Xin
2008-10-01
A sensor network integrates sensor technology, MEMS (micro-electro-mechanical systems) technology, embedded computing, wireless communication technology, and distributed information management technology. It is of great value in places that humans can hardly reach. Power consumption and size are the most important considerations when nodes are designed for a distributed WSN (wireless sensor network). Consequently, it is of great importance to decrease the size of a node, reduce its power consumption, and extend its life in the network. WSN nodes have been designed using the JN5121-Z01-M01 module produced by the Jennic company and IEEE 802.15.4/ZigBee technology. New features include support for CPU sleep modes and a long-term ultra-low-power sleep mode for the entire node. In the low-power configuration the node resembles existing small low-power nodes. An embedded temperature sensor node has been developed to verify and explore our architecture. The experimental results indicate that the WSN has the characteristics of high reliability, good stability, and ultra-low power consumption.
PDAs in Teacher Education: A Case Study Examining Mobile Technology Integration
ERIC Educational Resources Information Center
Franklin, Teresa; Sexton, Colleen; Lu, Young; Ma, Hongyan
2007-01-01
The classroom computer is no longer confined to a box on the desk. Mobile handheld computing devices have evolved into powerful and affordable learning tools. Handheld technologies are changing the way people access and work with information. The use of Personal Digital Assistants (PDAs or handhelds) has been an evolving part of the business world…
ERIC Educational Resources Information Center
Oren, Michael Anthony
2011-01-01
The juxtaposition of classic sociological theory and the relatively young discipline of human-computer interaction (HCI) serves as a powerful mechanism both for exploring the theoretical impacts of technology on human interactions and for applying technological systems to moderate interactions. It is the intent of this dissertation…
NASA Tech Briefs, August 1995. Volume 19, No. 8
NASA Technical Reports Server (NTRS)
1995-01-01
There is a special focus on computer graphics and simulation in this issue. Topics covered include: Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; and Mathematics and Information Sciences. There is also a section on Laser Technology, which includes a feature on moving closer to the sun's power.
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch-switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve with a single-processor-based dynamic simulation solution. High-performance computing (HPC)-based parallel computing is a very promising technology for speeding up the computation and facilitating the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation, using Open Multi-Processing (OpenMP) on a shared-memory platform and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
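The shared-memory/message-passing contrast can be sketched in a few lines. The toy below (plain Python with invented machine parameters, standing in for the paper's OpenMP and MPI implementations) integrates a small set of coupled swing-like machines two ways: an "OpenMP-style" version in which every update reads one global state array, and an "MPI-style" version in which each of two ranks owns half the machines and receives the others' angles as an explicit message each time step. Both orderings produce the same trajectory.

```python
import math

# Toy dynamic simulation of four coupled swing-like machines:
#   omega_i' = P_i - D*omega_i - K*sum_j sin(theta_i - theta_j),  theta_i' = omega_i
# All parameter values are invented for illustration.
P = [0.1, -0.1, 0.05, -0.05]
D, K, DT, STEPS, N = 0.5, 1.0, 0.01, 200, 4

def accel(theta, omega, i):
    return P[i] - D * omega[i] - K * sum(math.sin(theta[i] - t) for t in theta)

def simulate_shared():
    # "OpenMP-style": one shared state array; the per-machine loop is the
    # parallelizable region, every worker reading all angles directly.
    theta, omega = [0.0] * N, [0.0] * N
    for _ in range(STEPS):
        acc = [accel(theta, omega, i) for i in range(N)]
        for i in range(N):
            omega[i] += DT * acc[i]
            theta[i] += DT * omega[i]
    return theta

def simulate_mpi():
    # "MPI-style": two ranks each own half the machines; remote angles arrive
    # as messages captured before anyone updates, mimicking a halo exchange.
    owned = {0: [0, 1], 1: [2, 3]}
    theta, omega = [0.0] * N, [0.0] * N
    for _ in range(STEPS):
        inbox = {r: [theta[i] for i in owned[1 - r]] for r in (0, 1)}
        for r in (0, 1):
            view = theta[:]                      # own data plus received halo
            for k, i in enumerate(owned[1 - r]):
                view[i] = inbox[r][k]
            for i in owned[r]:
                a = accel(view, omega, i)
                omega[i] += DT * a
                theta[i] += DT * omega[i]
    return theta

# Both decompositions compute the same trajectory.
assert all(abs(a - b) < 1e-12 for a, b in zip(simulate_shared(), simulate_mpi()))
```

In the real implementations the halo exchange becomes MPI send/receive calls and the shared-memory loop becomes an OpenMP parallel-for over machines; the explicit communication step is the price the distributed version pays for its memory scalability.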
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National Laboratory programs.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C. C.
The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design and analysis tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.
ERIC Educational Resources Information Center
Brown, John Seely; Goldstein, Ira
A revolution that will transform learning in our society, altering both the methods and the content of education, has been made possible by harnessing tomorrow's powerful computer technology to serve as intelligent instructional systems. The unique quality of the computer that makes a revolution possible is that it can serve not only as a…
ERIC Educational Resources Information Center
Peled, Abraham
1987-01-01
Discusses some of the future trends in the use of the computer in our society, suggesting that computing is now entering a new phase in which it will grow exponentially more powerful, flexible, and sophisticated in the next decade. Describes some of the latest breakthroughs in computer hardware and software technology. (TW)
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Benny N.
2000-01-01
There has been significant improvement in the performance of VLSI devices in terms of size, power consumption, and speed in recent years, and this trend may continue for the near future. However, it is well known that there are major obstacles, i.e., the physical limits of feature-size reduction and the ever-increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of fundamentally new technologies that do not depend on the conventional feature-size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic sizes. Quantum computing, quantum-dot-based computing, DNA-based computing, and biologically inspired computing are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduced feature size (and hence increased integration level), reduced power consumption, and increased switching speed. Quantum-dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11-10^12 per square cm), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RCT) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor.
Building upon these circuits, we have developed novel algorithms and QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as the FFT and the wavelet and Walsh-Hadamard transforms.
Mechanical Computing Redux: Limitations at the Nanoscale
NASA Astrophysics Data System (ADS)
Liu, Tsu-Jae King
2014-03-01
Technology solutions for overcoming the energy efficiency limits of nanoscale complementary metal oxide semiconductor (CMOS) technology ultimately will be needed in order to address the growing issue of integrated-circuit chip power density. Off-state leakage current sets a fundamental lower limit in energy per operation for any voltage-level-based digital logic implemented with transistors (CMOS and beyond), which leads to practical limits for device density (i.e. cost) and operating frequency (i.e. system performance). Mechanical switches have zero off-state leakage and hence can overcome this fundamental limit. Contact adhesive force sets a lower limit for the switching energy of a mechanical switch, however, and also directly impacts its performance. This paper will review recent progress toward the development of nano-electro-mechanical relay technology and discuss remaining challenges for realizing the promise of mechanical computing for ultra-low-power electronics. Supported by the Center for Energy Efficient Electronics Science (NSF Award 0939514).
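The leakage-imposed energy floor can be made concrete with a back-of-envelope model (all constants below are invented for illustration and are not data from the paper): per-operation CMOS energy is switching energy plus leakage integrated over the operation, and the leakage term takes over as supply voltage drops, producing an interior minimum-energy voltage. An ideal relay, with zero off-state current, keeps improving as voltage scales down until contact adhesion takes over.

```python
import math

# Back-of-envelope sketch (assumed constants) of the leakage energy floor:
#   E_cmos(V)  = C*V^2 * (1 + W*exp(-V/S))   # switching + integrated leakage
#   E_relay(V) = C*V^2                        # ideal relay: zero off-state current
# W lumps together logic depth and the ratio of idle (leaking) gates to
# switching gates; S plays the role of the subthreshold slope.

C, W, S = 1e-15, 1000.0, 0.1

def e_cmos(v):
    return C * v * v * (1.0 + W * math.exp(-v / S))

def e_relay(v):
    return C * v * v

vs = [0.05 * k for k in range(2, 21)]   # sweep supply voltage 0.10 V .. 1.00 V
v_star = min(vs, key=e_cmos)            # CMOS minimum-energy voltage

# Lowering the supply below v_star makes CMOS energy worse again (leakage
# dominates), whereas the ideal relay only ever improves as voltage scales.
assert e_cmos(0.10) > e_cmos(v_star) < e_cmos(1.00)
assert min(vs, key=e_relay) == vs[0]
print(v_star)
```

In practice the relay's floor is set instead by the contact adhesion energy the abstract describes, which this toy model deliberately leaves out.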
Computer-Aided Analysis of Patents for Product Technology Maturity Forecasting
NASA Astrophysics Data System (ADS)
Liang, Yanhong; Gan, Dequan; Guo, Yingchun; Zhang, Peng
Product technology maturity forecasting is vital for any enterprise wishing to seize the chance for innovation and stay competitive over the long term. The Theory of Inventive Problem Solving (TRIZ) is acknowledged both as a systematic methodology for innovation and as a powerful tool for technology forecasting. Based on TRIZ, the state of the art in product technology maturity and the limits of its application are discussed. Applying text mining and patent analysis technologies, this paper proposes a computer-aided approach to product technology maturity forecasting. It can overcome the shortcomings of current methods.
Applications of Multi-Agent Technology to Power Systems
NASA Astrophysics Data System (ADS)
Nagata, Takeshi
Currently, agents are the focus of intense interest in many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications. Many important computing applications such as planning, process control, communication networks, and concurrent systems will benefit from using a multi-agent system approach. A multi-agent system is a structure given by an environment together with a set of artificial agents capable of acting on this environment. Multi-agent models are oriented towards interactions, collaborative phenomena, and autonomy. This article presents applications of multi-agent technology to power systems.
Cloud Technology May Widen Genomic Bottleneck - TCGA
Computational biologist Dr. Ilya Shmulevich suggests that renting cloud computing power might widen the bottleneck for analyzing genomic data. Learn more about his experience with the Cloud in this TCGA in Action Case Study.
1990-12-01
small powerful computers to businesses and homes on an international scale (29:74). Relatively low cost, high computing power, and ease of operation were... is performed. In large part, today's AF IM professional has been inundated with powerful new technologies which were rapidly introduced and inserted... state that, "In a survey of five years of MIS research, we found the average levels of statistical power to be relatively low" (5:104). In their own
The Space Technology 5 Avionics System
NASA Technical Reports Server (NTRS)
Speer, Dave; Jackson, George; Stewart, Karen; Hernandez-Pellerano, Amri
2004-01-01
The Space Technology 5 (ST5) mission is a NASA New Millennium Program project that will validate new technologies for future space science missions and demonstrate the feasibility of building, launching, and operating multiple miniature spacecraft that can collect research-quality in-situ science measurements. The three satellites in the ST5 constellation will be launched into a sun-synchronous Earth orbit in early 2006. ST5 fits into the 25-kilogram and 24-watt class of very small but fully capable spacecraft. The new technologies and design concepts for a compact power and command-and-data-handling (C&DH) avionics system are presented. The 2-card ST5 avionics design incorporates new technology components while being tightly constrained in mass, power, and volume. In order to hold down the mass and volume, and to qualify new technologies for future use in space, high-efficiency triple-junction solar cells and a lithium-ion battery were baselined into the power system design. The flight computer is co-located with the power system electronics in an integral spacecraft structural enclosure called the card cage assembly. The flight computer has a full set of uplink, downlink, and solid-state recording capabilities, and it implements a new CMOS ultra-low-power radiation-tolerant logic technology. There were a number of challenges imposed by the ST5 mission. Specifically, designing a micro-sat-class spacecraft demanded that minimizing mass, volume, and power dissipation drive the overall design. The result is a very streamlined approach that strives to maintain a high level of capability. The mission's radiation requirements, along with the low-voltage DC power distribution, limited the selection of analog parts that can operate within these constraints. The challenge of qualifying new-technology components for the space environment within a short development schedule was another hurdle.
The mission requirements also demanded magnetic cleanliness in order to reduce the effect of stray (spacecraft-generated) magnetic fields on the science-grade magnetometer.
Big data computing: Building a vision for ARS information management
USDA-ARS?s Scientific Manuscript database
Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...
Digital Technologies Jump Start Telemedicine
ERIC Educational Resources Information Center
Pierce, Alan
2011-01-01
The technologists, research doctors, computer programmers, electrical engineers, and biomedical engineers who design and create new medical diagnostic equipment and medical treatments are now working on medical systems that use the computing power of personal computers and cell phones. For disease diagnosis, they are finding ways to shrink…
Agent-Based Multicellular Modeling for Predictive Toxicology
Biological modeling is a rapidly growing field that has benefited significantly from recent technological advances, expanding traditional methods with greater computing power, parameter-determination algorithms, and the development of novel computational approaches to modeling bi...
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
Spacecraft computer technology at Southwest Research Institute
NASA Technical Reports Server (NTRS)
Shirley, D. J.
1993-01-01
Southwest Research Institute (SwRI) has developed and delivered spacecraft computers for a number of different near-Earth-orbit spacecraft including shuttle experiments and SDIO free-flyer experiments. We describe the evolution of the basic SwRI spacecraft computer design from those weighing in at 20 to 25 lb and using 20 to 30 W to newer models weighing less than 5 lb and using only about 5 W, yet delivering twice the processing throughput. Because of their reduced size, weight, and power, these newer designs are especially applicable to planetary instrument requirements. The basis of our design evolution has been the availability of more powerful processor chip sets and the development of higher density packaging technology, coupled with more aggressive design strategies in incorporating high-density FPGA technology and use of high-density memory chips. In addition to reductions in size, weight, and power, the newer designs also address the necessity of survival in the harsh radiation environment of space. Spurred by participation in such programs as MSTI, LACE, RME, Delta 181, Delta Star, and RADARSAT, our designs have evolved in response to program demands to be small, low-powered units, radiation tolerant enough to be suitable for both Earth-orbit microsats and for planetary instruments. Present designs already include MIL-STD-1750 and Multi-Chip Module (MCM) technology with near-term plans to include RISC processors and higher-density MCM's. Long term plans include development of whole-core processors on one or two MCM's.
Nano Goes Magnetic to Attract Big Business
NASA Technical Reports Server (NTRS)
2006-01-01
Glenn Research Center has combined state-of-the-art electrical designs with complex, computer-aided analyses to develop some of today's most advanced power systems, in space and on Earth. The center's Power and On-Board Propulsion Technology Division is the brain behind many of these power systems. For space, this division builds technologies that help power the International Space Station, the Hubble Space Telescope, and Earth-orbiting satellites. For Earth, it has woven advanced aerospace power concepts into commercial energy applications that include solar and nuclear power generation, battery and fuel cell energy storage, communications and telecommunications satellites, cryocoolers, hybrid and electric vehicles, and heating and air-conditioning systems.
Thomas, Bex George; Elasser, Ahmed; Bollapragada, Srinivas; Galbraith, Anthony William; Agamy, Mohammed; Garifullin, Maxim Valeryevich
2016-03-29
A system and method of using one or more DC-DC/DC-AC converters and/or alternative devices allows strings of multiple module technologies to coexist within the same PV power plant. A computing (optimization) framework estimates the percentage allocation of PV power plant capacity to selected PV module technologies. The framework and its supporting components consider irradiation, temperature, spectral profiles, cost, and other practical constraints to achieve the lowest levelized cost of electricity, maximum output, and minimum system cost. The system and method can function using any device enabling distributed maximum power point tracking at the module, string, or combiner level.
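The allocation idea can be illustrated with a deliberately tiny model (all cost and yield figures below are invented, and exhaustive search stands in for the patent's optimization framework): choose the capacity split between two module technologies that minimizes a toy levelized cost of electricity, where an assumed derating of thin-film yield at high shares is what makes a mixed plant beat either pure option.

```python
# Toy allocation sketch: split plant capacity between two PV module
# technologies to minimize a simple LCOE = total cost / lifetime energy.
# Every number is assumed for illustration; real frameworks would add
# irradiation, temperature, and spectral constraints.

CAPACITY_KW, YEARS = 10_000.0, 25
CSI_CAPEX, CSI_YIELD = 900.0, 1700.0    # c-Si: $/kW, kWh/kW/yr (assumed)
TF_CAPEX, TF_YIELD0 = 700.0, 1500.0     # thin-film base figures (assumed)

def lcoe(x):                             # x = fraction of capacity given to c-Si
    tf = 1.0 - x
    cost = CAPACITY_KW * (x * CSI_CAPEX + tf * TF_CAPEX)
    tf_yield = TF_YIELD0 * (1.0 - 0.4 * tf)   # assumed crowding/derating effect
    energy = CAPACITY_KW * YEARS * (x * CSI_YIELD + tf * tf_yield)
    return cost / energy                 # $/kWh

# Brute-force search over allocation fractions in 1% steps.
best = min((k / 100.0 for k in range(101)), key=lcoe)
assert lcoe(best) < min(lcoe(0.0), lcoe(1.0))   # the mix beats both pure plants
print(best)
```

With a purely linear model the optimum would always sit at a corner (one technology takes the whole plant); it is the interaction terms, here the derating assumption, that make a genuinely mixed allocation optimal, which is the situation the patented framework targets.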
Computer modelling of technogenic thermal pollution zones in large water bodies
NASA Astrophysics Data System (ADS)
Parshakova, Ya N.; Lyubimova, T. P.
2018-01-01
In the present work, the thermal pollution zones created by the discharge of heated water from thermal power plants are investigated using the example of the Permskaya Thermal Power Plant (Permskaya TPP, or Permskaya GRES), one of the largest thermal power plants in Europe. The study is performed for different technological and hydrometeorological conditions. Since the vertical temperature distribution in such reservoirs is highly inhomogeneous, the computations are performed in the framework of a 3D model.
Expeditionary Oblong Mezzanine
2016-03-01
...providing infrastructure as a service (IaaS) and software as a service (SaaS) cloud computing technologies. IaaS is a way of providing computing services such as servers, storage, and network equipment services (Mell & Grance, 2009). SaaS is a means of providing software and applications as an on
Using tablet technology in operational radiation safety applications.
Phillips, Andrew; Linsley, Mark; Houser, Mike
2013-11-01
Tablet computers have become a mainstream product in today's personal, educational, and business worlds. These tablets offer computing power, storage, and a wide range of available products to meet nearly every user need. To take advantage of this new computing technology, a system was developed for the Apple iPad (Apple Inc. 1 Infinite Loop Cupertino, CA 95014) to perform health and safety inspections in the field using editable PDFs and saving them to a database while keeping the process easy and paperless.
Training the Trainers in Technology.
ERIC Educational Resources Information Center
Burgan, Owen
The key to successful harnessing of the power and potential of new educational technologies lies in appropriate training of teachers. An educational technology joint venture was created at the Northern Territory University in Darwin (Australia) in which the Institute of Technical and Further Education provided the equipment, the Computing Services…
Perspectives on next-generation technology for environmental sensor networks
Barbara J. Benson; Barbara J. Bond; Michael P. Hamilton; Russell K. Monson; Richard Han
2009-01-01
Sensor networks promise to transform and expand environmental science. However, many technological difficulties must be overcome to achieve this potential. Partnerships of ecologists with computer scientists and engineers are critical in meeting these challenges. Technological issues include promoting innovation in new sensor design, incorporating power optimization...
A Brief Analysis of Development Situations and Trend of Cloud Computing
NASA Astrophysics Data System (ADS)
Yang, Wenyan
2017-12-01
In recent years, the rapid development of Internet technology has radically changed how people work, learn, and live. More and more activities are completed by means of computers and networks. The amount of information and data generated grows day by day, and people rely increasingly on computers, whose computing power often fails to meet users' demands for accuracy and speed. Cloud computing technology has developed rapidly and is widely applied in the computer industry thanks to its high precision, fast computation, and ease of use; it has become a focus of current information research. In this paper, the development situation and trends of cloud computing are analyzed and discussed.
JPRS Report, Science & Technology, China.
1992-12-08
importance of the computer information industry to the development of the national economy and the people's standard of living. Forecasts call...past several years, and the application of computers has permeated every trade and industry, providing powerful...system and ample human talent; market potential is large; and it has potential for low-cost development. However, the scale of its industrial
Computer Problem-Solving Coaches for Introductory Physics: Design and Usability Studies
ERIC Educational Resources Information Center
Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew
2016-01-01
The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how…
Mobile Computing: Trends Enabling Virtual Management
ERIC Educational Resources Information Center
Kuyatt, Alan E.
2011-01-01
The growing power of mobile computing, with its constantly available wireless link to information, creates an opportunity to use innovative ways to work from any location. This technological capability allows companies to remove constraints of physical proximity so that people and enterprises can work together at a distance. Mobile computing is…
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
Computers in the Classroom: A Feminist Issue.
ERIC Educational Resources Information Center
Stalker, Sylvia
Women stand to lose a great deal in the information revolution if they fail to master computer technology. Presently they are taking little part in the computer field. In the future, access to jobs, information, and power will depend on computer knowledge and skills. Women will be pushed into lower status jobs or out of jobs altogether,…
ERIC Educational Resources Information Center
Chien, Tien-Chen
2008-01-01
The computer is not only a powerful technology for managing information and enhancing productivity, but also an efficient tool for education and training. Computer anxiety can be one of the major problems that affect the effectiveness of learning. Through analyzing the related literature, this study describes the phenomenon of computer anxiety,…
Dense, Efficient Chip-to-Chip Communication at the Extremes of Computing
ERIC Educational Resources Information Center
Loh, Matthew
2013-01-01
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural…
Flight Computer Design for the Space Technology 5 (ST-5) Mission
NASA Technical Reports Server (NTRS)
Speer, David; Jackson, George; Raphael, Dave; Day, John H. (Technical Monitor)
2001-01-01
As part of NASA's New Millennium Program, the Space Technology 5 mission will validate a variety of technologies for nano-satellite and constellation mission applications. Included are: a miniaturized and low-power X-band transponder, a constellation communication and navigation transceiver, a cold gas micro-thruster, two different variable emittance (thermal) controllers, flex cables for solar array power collection, autonomous ground-based constellation management tools, and a new CMOS ultra-low-power, radiation-tolerant, +0.5 volt logic technology. The ST-5 focus is on small size and low power. A single-processor, multi-function flight computer will implement direct digital and analog interfaces to all of the other spacecraft subsystems and components. There will not be a distributed data system that uses a standardized serial bus such as MIL-STD-1553 or MIL-STD-1773. The flight software running on the single processor will be responsible for all real-time processing associated with: guidance, navigation and control, command and data handling (C&DH) including uplink/downlink, power switching and battery charge management, science data analysis and storage, intra-constellation communications, and housekeeping data collection and logging. As a nanosatellite trail-blazer for future constellations of up to 100 separate space vehicles, ST-5 will demonstrate a compact (single board), low-power (5.5 watts) solution to the data acquisition, control, communications, processing and storage requirements that have traditionally required an entire network of separate circuit boards and/or avionics boxes. In addition to the New Millennium technologies, other major spacecraft subsystems include the power system electronics, a lithium-ion battery, triple-junction solar cell arrays, a science-grade magnetometer, a miniature spinning sun sensor, and a propulsion system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Felicia Angelica; Waymire, Russell L.
2013-10-01
Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations (the U.S. Nuclear Regulatory Commission, the Nuclear Energy Institute, and the International Atomic Energy Agency) related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.
COMPUTER TECHNOLOGY AND SOCIAL CHANGE,
This paper presents a discussion of the social, political, economic and psychological problems associated with the rapid growth and development of...public officials and responsible groups is required to increase public understanding of the computer as a powerful tool, to select appropriate
Recapturing Technology for Education: Keeping Tomorrow in Today's Classrooms
ERIC Educational Resources Information Center
Gura, Mark; Percy, Bernard
2005-01-01
Despite significant investment of funds, time, and effort in bringing computers, the Internet, and related technologies into classrooms, educators have turned their backs on these new power tools of the intellect. School is the last remaining institution to keep 21st-century technology at arm's length. How can technology be used to enrich and…
The international water conference proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guseman, J.R.
1984-10-01
This book provides information on computer applications to water chemistry control, groundwater, membrane technology, instrumentation/analytical techniques, and ion exchange. Other topics of discussion include cooling water, biocontrol, the hydraulic properties of ion exchange resins, steam electric power plant aqueous discharges, and the colorimetric determination of trace benzotriazole or tolyltriazole. Water chemistry guidelines for large steam-generating power plants are discussed, as well as wastewater treatment, boiler water conditioning, and ion exchange/computer related topics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
The look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, computes the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.
Energy regeneration model of self-consistent field of electron beams into electric power
NASA Astrophysics Data System (ADS)
Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.
2016-04-01
We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation into the users' electric power grid (the onboard spacecraft network). We perform computer simulations validating the high energy efficiency of the studied processes for application in electric power technology, both to generate power and in electric power plants and propulsion installations aboard spacecraft.
Affordable Emerging Computer Hardware for Neuromorphic Computing Applications
2011-09-01
...speedup over software [3, 4]. Table 1 shows a comparison of the computing performance, communication performance, power consumption...time is probably 5 frames per second, corresponding to 5 saccades. III. RESULTS AND DISCUSSION The use of IBM Cell-BE technology (Sony PlayStation
Making Connections: Power at Your Fingertips. Resources in Technology.
ERIC Educational Resources Information Center
Deal, Walter F., III
1997-01-01
Discusses inventions and innovations in battery technology. Includes information about batteries that have produced products such as cellular telephones, portable computers, and camcorders. Also describes lithium and solid state batteries and offers tips on battery safety. (JOW)
NASA Astrophysics Data System (ADS)
Tekin, Tolga; Töpper, Michael; Reichl, Herbert
2009-05-01
Technological frontiers between semiconductor technology, packaging, and system design are disappearing. Scaling down geometries [1] alone no longer provides improved performance, lower power, smaller size, and lower cost. It will require "More than Moore" [2] through the tighter integration of system-level components at the package level. System-in-Package (SiP) will deliver the efficient use of three dimensions (3D) through innovation in packaging and interconnect technology. A key bottleneck to the implementation of high-performance microelectronic systems, including SiP, is the lack of low-latency, high-bandwidth, and high-density off-chip interconnects. Challenges in achieving high-bandwidth chip-to-chip communication using electrical interconnects include the high losses in the substrate dielectric, reflections and impedance discontinuities, and susceptibility to crosstalk [3]. The incentive to use photonics to overcome these challenges and deliver low-latency, high-bandwidth communication will enable the vision of optical computing within next-generation architectures. Today's supercomputers offer sustained performance of more than a petaflops, which can be increased by utilizing optical interconnects. Next-generation computing architectures are needed that combine ultra-low power consumption with ultra-high performance through novel interconnection technologies. In this paper we discuss a CMOS-compatible underlying technology to enable next-generation optical computing architectures. By introducing a new optical layer within the 3D SiP, the development of converged microsystems and their deployment in next-generation optical computing architectures will be enabled.
Overview of free-piston Stirling technology at the NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Slaby, J. G.
1985-01-01
An overview of the National Aeronautics and Space Administration (NASA) Lewis Research Center (Lewis) free-piston Stirling engine activities is presented. These activities include: (1) a generic free-piston Stirling technology project being conducted to develop technologies synergistic to both space power and terrestrial heat pump applications in a cooperative, cost-shared effort with the Department of Energy (DOE)/Oak Ridge National Laboratory (ORNL), and (2) a free-piston Stirling space-power technology demonstration project as part of the SP-100 program being conducted in support of the Department of Defense (DOD), DOE, and NASA/Lewis. The generic technology effort includes extensive parametric testing of a 1 kW free-piston Stirling engine (RE-1000), development and validation of a free-piston Stirling performance computer code, and fabrication and initial testing of a hydraulic output modification for the RE-1000 engine. The space power technology effort, under SP-100, addresses the status of the 25 kWe Space Power Demonstrator Engine (SPDE), including early test results.
Computer Competency of Nursing Students at a University in Thailand
ERIC Educational Resources Information Center
Niyomkar, Srimana
2012-01-01
In recent years, computer and information technology has been rapidly integrated into the education and healthcare fields. In the 21st century, computers are more powerful than ever and are used in all aspects of nursing, including education, practice, policy, and research. Consequently, student nurses will need to utilize computer…
Clarkson First College to Require Computer Literacy.
ERIC Educational Resources Information Center
Technological Horizons in Education, 1983
1983-01-01
Freshmen at Clarkson College of Technology (Potsdam, NY) will be issued a Zenith microcomputer. Every aspect of Clarkson's curriculum will be redesigned to capitalize on the new computing and word processing power. Students will pay $200/semester and a one-time $200 maintenance fee and will keep the computer when they graduate. (Author/JN)
Proceduracy: Computer Code Writing in the Continuum of Literacy
ERIC Educational Resources Information Center
Vee, Annette
2010-01-01
This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…
A SOCIO-ECONOMIST LOOKS AT THE CURRENT VALUES AND CHANGING NEEDS OF YOUTH. FINAL DRAFT.
ERIC Educational Resources Information Center
THEOBALD, ROBERT
Man has achieved the power to create an environment suited to his needs. This power comes from developments in the utilization of energy, advancements in chemistry, an increase in scientific problem-solving ability, and computer technology. These sources of power result in the drive toward the development of destructive power, the capability of…
The transforming effect of handheld computers on nursing practice.
Thompson, Brent W
2005-01-01
Handheld computers have the power to transform nursing care. The roots of this power are the shift to decentralization of communication, electronic health records, and nurses' greater need for information at the point of care. This article discusses the effects of handheld resources, calculators, databases, electronic health records, and communication devices on nursing practice. The US government has articulated the necessity of implementing the use of handheld computers in healthcare. Nurse administrators need to encourage and promote the diffusion of this technology, which can reduce costs and improve care.
Responding to Information Needs in the 1980s.
ERIC Educational Resources Information Center
McGraw, Harold W., Jr.
1979-01-01
Argues that technological developments in cable television, computers, and telecommunications could decentralize power and put the resources of the new technology more broadly at the command of individuals and small groups, but that this potential requires action to be realized. (Author)
Federal Barriers to Innovation
ERIC Educational Resources Information Center
Miller, Raegen; Lake, Robin
2012-01-01
With educational outcomes inadequate, resources tight, and students' academic needs growing more complex, America's education system is certainly ready for technological innovation. And technology itself is ripe to be exploited. Devices harnessing cheap computing power have become smart and connected. Voice recognition, artificial intelligence,…
Mobile Assisted Language Learning: Review of the Recent Applications of Emerging Mobile Technologies
ERIC Educational Resources Information Center
Yang, Jaeseok
2013-01-01
As mobile computing technologies have become more powerful and pervasive in people's daily lives, the issue of mobile-assisted language learning (MALL) has also been widely explored in CALL research. Many studies on MALL consider that emerging mobile technologies have considerable potential for effective language learning. This review study…
Analysis on energy consumption index system of thermal power plant
NASA Astrophysics Data System (ADS)
Qian, J. B.; Zhang, N.; Li, H. F.
2017-05-01
Currently, against the backdrop of increasingly tight resources, energy conservation is a realistic choice for easing energy constraints, and reducing the energy consumption of thermal power plants has become an inevitable direction of development. Combining computer network technology to build a thermal power “small index” monitoring and optimization management system is an application of information technology that helps power plants meet the competitive requirements of the product market. This paper first describes the research status of thermal power energy-saving theory, then attempts to establish the small-index system and build a “small index” monitoring and optimization management system in a thermal power plant. Finally, it elaborates key technical and economic issues concerning thermal power plant small indices that remain to be studied and resolved.
Borresen, Jon; Lynch, Stephen
2012-01-01
In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limits, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.
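A minimal sketch of the idea, reducing each oscillator to a unit that fires when its summed input drive crosses a threshold. This abstraction drops the oscillator dynamics entirely and is not the authors' model; it only shows how threshold elements compose into standard binary logic:

```python
# Simplified discrete sketch of threshold-based logic, in the spirit of the
# coupled threshold oscillators described above. A real oscillator computer
# encodes bits in oscillation phase or amplitude; here each "oscillator" is
# reduced to a unit that fires when its weighted input drive crosses a threshold.

def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted input drive exceeds the threshold."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    return 1 if drive > threshold else 0

def OR(a, b):   return threshold_unit((a, b), (1.0, 1.0), 0.5)
def AND(a, b):  return threshold_unit((a, b), (1.0, 1.0), 1.5)
def NOT(a):     return threshold_unit((a,), (-1.0,), -0.5)

def XOR(a, b):
    # Composed from the primitives, as gates would be coupled in hardware
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Sum and carry bits, matching standard binary logic."""
    return XOR(a, b), AND(a, b)
```

The point of the composition is that any Boolean circuit, and hence a conventional computer architecture, can in principle be built from such threshold elements.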
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
Technology Assessment: 1983 Forecast of Future Test Technology Requirements.
1983-06-01
effectively utilizes existing vehicle space, power and support equipment while maintaining critical interfaces with on-board computers and fire control...They might be a part of a large ATE system due to such things as the environmental effects on noise and signal/power loss. A summary of meaningful
Tools and Techniques for Measuring and Improving Grid Performance
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
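The ESM conversion described above can be sketched as follows. The equivalency factors are assumed placeholders, not official ALS values; the paper's argument is precisely that static sizing misses the peaks that only a dynamic simulation reveals:

```python
# Minimal static ESM sketch (illustrative only). The equivalency factors
# below are assumed placeholder values, not official ALS figures.

EQUIV = {
    "volume":   66.7,   # kg per m^3       (assumed)
    "power":    237.0,  # kg per kW        (assumed)
    "cooling":  60.0,   # kg per kW        (assumed)
    "crewtime": 0.46,   # kg per crew-hour (assumed)
}

def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_hr):
    """Convert subsystem requirements into a single mass-equivalent cost."""
    return (mass_kg
            + volume_m3 * EQUIV["volume"]
            + power_kw * EQUIV["power"]
            + cooling_kw * EQUIV["cooling"]
            + crewtime_hr * EQUIV["crewtime"])

# A dynamic analysis would instead size against the *peak* of simulated
# power and cooling profiles rather than nominal point values:
def esm_dynamic(mass_kg, volume_m3, power_profile_kw, cooling_profile_kw, crewtime_hr):
    return esm(mass_kg, volume_m3,
               max(power_profile_kw), max(cooling_profile_kw), crewtime_hr)
```

Because power and cooling hardware must be sized for peaks, `esm_dynamic` over a simulated profile will generally exceed the static estimate computed from nominal averages.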
Study on key technologies of optimization of big data for thermal power plant performance
NASA Astrophysics Data System (ADS)
Mao, Mingyang; Xiao, Hong
2018-06-01
Thermal power generation accounts for 70% of China's electricity generation and about 40% of the corresponding pollutant emissions. Optimizing thermal power efficiency requires monitoring and understanding the whole process of coal combustion and pollutant migration, while power system performance data show an explosive growth trend. The purpose of this study is to integrate numerical simulation with big data technology and to develop a thermal power plant efficiency data optimization platform and a nitrogen oxide emission reduction system, providing reliable technical support for thermal power plants to improve efficiency, save energy, and reduce emissions. The method is big data technology, represented by "multi-source heterogeneous data integration", "large data distributed storage" and "high-performance real-time and off-line computing", which can greatly enhance a thermal power plant's capacity for energy-consumption analysis and intelligent decision-making. Data mining algorithms are then used to establish a mathematical model of boiler combustion and to mine power plant boiler efficiency data; combined with numerical simulation, this reveals the laws of boiler combustion and pollutant generation and the influence of combustion parameters on them. The result is optimized boiler combustion parameters, which can achieve energy savings.
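As a toy illustration of the mine-then-optimize step, one could fit a quadratic to historical (combustion parameter, efficiency) records and take its vertex as the recommended setting. The records and the single-parameter model below are assumptions for illustration, not the paper's actual data-mining method:

```python
# Toy sketch: mine historical operating records to model boiler efficiency
# as a function of one combustion parameter, then pick the setting that
# maximizes it. Data and the one-parameter model are hypothetical.

# (excess-air coefficient, observed boiler efficiency %) -- hypothetical records
RECORDS = [(1.05, 91.2), (1.10, 92.0), (1.15, 92.5), (1.20, 92.4),
           (1.25, 91.9), (1.30, 91.0)]

def fit_quadratic(points):
    """Least-squares fit of eff = a*x^2 + b*x + c via the normal equations."""
    n = len(points)
    sx = sum(x for x, _ in points); sx2 = sum(x * x for x, _ in points)
    sx3 = sum(x ** 3 for x, _ in points); sx4 = sum(x ** 4 for x, _ in points)
    sy = sum(y for _, y in points); sxy = sum(x * y for x, y in points)
    sx2y = sum(x * x * y for x, y in points)
    # Augmented 3x3 system, solved by Gaussian elimination with pivoting
    A = [[sx4, sx3, sx2, sx2y],
         [sx3, sx2, sx,  sxy],
         [sx2, sx,  n,   sy]]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [v - f * w for v, w in zip(A[r], A[i])]
    coef = [0.0] * 3
    for i in reversed(range(3)):
        coef[i] = (A[i][3] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # a, b, c

def optimal_setting(points):
    """Vertex of the fitted parabola: the efficiency-maximizing setting."""
    a, b, _ = fit_quadratic(points)
    return -b / (2 * a)
```

A production system would of course use many parameters and a richer model, but the shape of the workflow (fit a model to mined records, then optimize over it) is the same.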
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced.
This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
An exploration of neuromorphic systems and related design issues/challenges in dark silicon era
NASA Astrophysics Data System (ADS)
Chandaliya, Mudit; Chaturvedi, Nitin; Gurunarayanan, S.
2018-03-01
Current microprocessors have shown remarkable improvements in performance and memory capacity since their introduction. However, due to power and thermal limitations, only a fraction of cores can operate at full frequency at any instant, irrespective of the advantages of each new technology generation. This phenomenon of microprocessor under-utilization is called dark silicon, and it is a major obstacle to continued innovation in computing. To overcome the limitations of the utilization wall, IBM explored and invented neurosynaptic system chips. This has opened a wide scope of research in the fields of innovative computing, technology, materials science, machine learning, etc. In this paper, we first review the diverse stages of research that have been influential in the innovation of neurosynaptic architectures. These architectures focus on the development of brain-like frameworks efficient enough to execute a broad set of computations in real time while keeping ultra-low power consumption and area considerations in mind. We also discuss the inherent challenges and the opportunities of designing neuromorphic systems with existing technologies in the dark silicon era, which constitute a key area of future research.
Children, Computers, and Powerful Ideas
ERIC Educational Resources Information Center
Bull, Glen
2005-01-01
Today it is commonplace that computers and technology permeate almost every aspect of education. In the late 1960s, though, the idea that computers could serve as a catalyst for thinking about the way children learn was a radical concept. In the early 1960s, Seymour Papert joined the faculty of MIT and founded the Artificial Intelligence Lab with…
Robot, computer problem solving system
NASA Technical Reports Server (NTRS)
Becker, J. D.; Merriam, E. W.
1973-01-01
The TENEX computer system, the ARPA network, and computer language design technology were applied to support the complex system programs. By combining the pragmatic and theoretical aspects of robot development, an approach is created that is grounded in realism but also has at its disposal the power that comes from examining complex problems from an abstract, analytical point of view.
HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation
NASA Technical Reports Server (NTRS)
Sterling, Thomas; Bergman, Larry
2000-01-01
Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption.
The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.
Simulation and Gaming: Directions, Issues, Ponderables.
ERIC Educational Resources Information Center
Uretsky, Michael
1995-01-01
Discusses the current use of simulation and gaming in a variety of settings. Describes advances in technology that facilitate the use of simulation and gaming, including computer power, computer networks, software, object-oriented programming, video, multimedia, virtual reality, and artificial intelligence. Considers the future use of simulation…
Interfaces for Advanced Computing.
ERIC Educational Resources Information Center
Foley, James D.
1987-01-01
Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…
Riener, Robert
2016-05-31
The Cybathlon is a new kind of championship, where people with physical disabilities compete against each other at tasks of daily life, with the aid of advanced assistive devices including robotic technologies. The first championship will take place at the Swiss Arena Kloten, Zurich, on 8 October 2016. Six disciplines are part of the competition comprising races with powered leg prostheses, powered arm prostheses, functional electrical stimulation driven bikes, powered wheelchairs, powered exoskeletons and brain-computer interfaces. This commentary describes the six disciplines and explains the current technological deficiencies that have to be addressed by the competing teams. These deficiencies at present often lead to disappointment or even rejection of some of the related technologies in daily applications. The Cybathlon aims to promote the development of useful technologies that facilitate the lives of people with disabilities. In the long run, the developed devices should become affordable and functional for all relevant activities in daily life.
Novel Ruggedized Packaging Technology for VCSELs
2017-03-01
Novel Ruggedized Packaging Technology for VCSELs. Charlie Kuznia, ckuznia@ultracomm-inc.com, Ultra Communications, Inc., Vista, CA, USA, 92081...achieve low-power, EMI-immune links within high-performance military computing and sensor systems. Figure 1. Chip-scale packaging of
Stakeholder Perceptions of ICT Usage across Management Institutes
ERIC Educational Resources Information Center
Goyal, Ela; Purohit, Seema; Bhagat, Manju
2013-01-01
Information and communication technology (ICT), which includes radio, television, and newer digital technologies such as computers and the internet, is a potentially powerful tool for extending educational opportunities, formal and non-formal, to one and all. It provides opportunities to deploy innovative teaching methodologies and interesting…
2008-04-01
Space GmbH as follows: B. TECHNICAL PROPOSAL/DESCRIPTION OF WORK. Cell: A Revolutionary High Performance Computing Platform. On 29 June 2005 [1...IBM has announced that it has partnered with Mercury Computer Systems, a maker of specialized computers. The Cell chip provides massive floating-point...the computing industry away from the traditional processor technology dominated by Intel. While in the past, the development of computing power has
ERIC Educational Resources Information Center
Sardone, Nancy B.
2011-01-01
The confluence of powerful technologies of computers and network connectivity has brought explosive growth to the field of Information Technology (IT). The problem presented in this study is whether the type of learning environment where IT concepts are taught to undergraduates has a relationship to the development of IT fluency and course…
Spring 2006. Industry Study. Manufacturing Industry
2006-01-01
ANALYSIS OF TRENDS. Today the U.S. is the global leader in manufacturing innovation and technology. Continued advancements in both computing power and...than ninety percent of all annual U.S. patents as reported by the Department of Commerce. Through innovation and the application of new technology...mobilization, innovation and technology, the manufacturing transformation, environmental balance, and international travel impressions
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
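The performance-per-watt metric the abstract names is simply throughput divided by power draw. A minimal sketch is below; the event rates and wattages are invented placeholders for illustration, not measurements from the paper.

```python
# Hypothetical throughput (events/s) and power draw (W) per platform;
# numbers are illustrative assumptions, not the paper's results.
platforms = {
    "Xeon Phi MIC": {"events_per_sec": 120.0, "watts": 300.0},
    "X-Gene ARMv8": {"events_per_sec": 45.0, "watts": 60.0},
}

def perf_per_watt(platforms):
    """Return events per second per watt for each platform."""
    return {name: d["events_per_sec"] / d["watts"] for name, d in platforms.items()}

for name, ppw in perf_per_watt(platforms).items():
    print(f"{name}: {ppw:.3f} events/s/W")
```

With these made-up figures the slower low-power SoC wins on efficiency, which is exactly the trade-off such evaluations weigh.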
Deep Learning in Medical Imaging: General Overview
Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae
2017-01-01
The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging. PMID:28670152
Deep Learning in Medical Imaging: General Overview.
Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug
2017-01-01
The artificial neural network (ANN)-a machine learning technique inspired by the human neuronal synapse system-was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.
Histopathological Image Analysis: A Review
Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent
2010-01-01
Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804
Portable Computer Technology (PCT) Research and Development Program Phase 2
NASA Technical Reports Server (NTRS)
Castillo, Michael; McGuire, Kenyon; Sorgi, Alan
1995-01-01
This project report focuses on: (1) Design and development of two Advanced Portable Workstation 2 (APW 2) units. These units incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces. (2) Use of these units to integrate and demonstrate advanced wireless network and portable video capabilities. (3) Qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, focusing on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.
NASA Technical Reports Server (NTRS)
1976-01-01
TRW has applied the Apollo checkout procedures to retail-store and bank-transaction systems, as well as to control systems for electric power transmission grids -- reducing the chance of power blackouts. Automatic checkout equipment for Apollo Spacecraft is one of the most complex computer systems in the world. Used to integrate extensive Apollo checkout procedures from manufacture to launch, it has spawned major advances in computer systems technology. Store and bank credit system has caused significant improvement in speed and accuracy of transactions, credit authorization, and inventory control. A similar computer service called "Validata" is used nationwide by airlines, airline ticket offices, car rental agencies, and hotels.
Borresen, Jon; Lynch, Stephen
2012-01-01
In the 1940s, the first generation of modern computers used vacuum-tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limit, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034
ERIC Educational Resources Information Center
Oklahoma State Board of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
These instructor materials for an aviation maintenance technology course contain five instructional modules. The modules cover the following topics: determining the relationship of voltage, current, resistance, and power in electrical circuits; computing and measuring capacitance and inductance; measuring voltage, current, resistance, and…
NASA Technical Reports Server (NTRS)
1991-01-01
The purpose of the conference was to increase awareness of existing NASA developed technologies that are available for immediate use in the development of new products and processes, and to lay the groundwork for the effective utilization of emerging technologies. There were sessions on the following: Computer technology and software engineering; Human factors engineering and life sciences; Information and data management; Material sciences; Manufacturing and fabrication technology; Power, energy, and control systems; Robotics; Sensors and measurement technology; Artificial intelligence; Environmental technology; Optics and communications; and Superconductivity.
ERIC Educational Resources Information Center
Hu, Qinran; Li, Fangxing; Chen, Chien-fei
2015-01-01
There is a worldwide trend to modernize old power grid infrastructures to form future smart grids, which will achieve efficient, flexible energy consumption by using the latest technologies in communication, computing, and control. Smart grid initiatives are moving power systems curricula toward smart grids. Although the components of smart grids…
Energy Consumption Management of Virtual Cloud Computing Platform
NASA Astrophysics Data System (ADS)
Li, Lin
2017-11-01
For research on energy consumption management of virtual cloud computing platforms, the energy behavior of both the virtual machines and the cloud platform itself must be understood in depth; only then can the problems facing energy management be solved. The key problem lies in data centers with high energy consumption, which creates a pressing need for new scientific techniques. Virtualization technology and cloud computing have become powerful tools in everyday life, work, and production because of their strengths and many advantages, including very high resource utilization, and both are developing rapidly, making them essential in the constantly developing information age. This paper summarizes, explains, and further analyzes the energy consumption management questions of the virtual cloud computing platform, ultimately giving readers a clearer understanding of the topic and its applications.
ERIC Educational Resources Information Center
Zadahmad, Manouchehr; Yousefzadehfard, Parisa
2016-01-01
Mobile Cloud Computing (MCC) aims to improve all mobile applications such as m-learning systems. This study presents an innovative method to use web technology and software engineering's best practices to provide m-learning functionalities hosted in a MCC-learning system as service. Components hosted by MCC are used to empower developers to create…
Computers in the English Program: Promises and Pitfalls. New York State English Council Monograph.
ERIC Educational Resources Information Center
Chew, Charles R., Ed.
Since an increasing number of English teachers are being asked to find a place for computers in the English program, this monograph focuses on issues connected to this technology. The first article sets the stage with a discussion of the power and potential of the computer. Other articles focus on the following topics: (1) promises and…
Gammaitoni, Luca; Chiuchiú, D; Madami, M; Carlotti, G
2015-06-05
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
NASA Astrophysics Data System (ADS)
Gammaitoni, Luca; Chiuchiú, D.; Madami, M.; Carlotti, G.
2015-06-01
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
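A benchmark often cited in this zero-power-computing literature (an addition here, not stated in the abstract itself) is the Landauer bound: erasing one bit at temperature T dissipates at least k_B·T·ln 2 of heat. A quick sketch of that floor:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value since 2019)

def landauer_limit(temp_kelvin):
    """Minimum energy in joules to erase one bit at temperature T: k_B * T * ln 2."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature (300 K) the bound is roughly 2.87e-21 J per bit erased,
# many orders of magnitude below what present CMOS switches dissipate.
print(landauer_limit(300.0))
```

The gap between this thermodynamic floor and real device dissipation is what motivates the physics-of-switches research programs the abstract describes.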
Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2003-01-01
The accelerating pace of computing technology development shows no signs of abating. Computing power is likely to reach 100 Tflop/s by 2004 and 1 Pflop/s (10(exp 15) Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits, and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms: grid, ubiquitous, and autonomic computing. The third part lists future computing alternatives and the characteristics of future computing environments. The fourth part describes future aerospace workforce research, learning, and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
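The abstract's core idea of dividing the spatial domain between computing processes can be sketched very simply. The snippet below assigns cells to processes by slicing the x-axis into equal-width strips; the function names and the one-dimensional strip scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of spatial domain decomposition: each process owns one
# equal-width strip of the x-axis, and every cell belongs to the process
# whose strip contains its x-coordinate.

def partition_cells(cell_x_positions, x_min, x_max, n_procs):
    """Map each cell index to the rank of the process owning its x-strip."""
    width = (x_max - x_min) / n_procs
    owner = {}
    for i, x in enumerate(cell_x_positions):
        strip = min(int((x - x_min) / width), n_procs - 1)  # clamp the right edge
        owner[i] = strip
    return owner

cells = [0.1, 0.9, 1.5, 2.4, 3.0, 3.99]
print(partition_cells(cells, 0.0, 4.0, 4))
```

In a real simulation, cells near strip boundaries would additionally be exchanged as "halo" copies between neighboring processes each step, which is the role of the communication routines the paper describes.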
Optical interconnection networks for high-performance computing systems
NASA Astrophysics Data System (ADS)
Biberman, Aleksandr; Bergman, Keren
2012-04-01
Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.
Designer drugs: the evolving science of drug discovery.
Wanke, L A; DuBose, R F
1998-07-01
Drug discovery and design are fundamental to drug development. Until recently, most drugs were discovered through random screening or developed through molecular modification. New technologies are revolutionizing this phase of drug development. Rational drug design, using powerful computers and computational chemistry and employing X-ray crystallography, nuclear magnetic resonance spectroscopy, and three-dimensional quantitative structure activity relationship analysis, is creating highly specific, biologically active molecules by virtual reality modeling. Sophisticated screening technologies are eliminating all but the most active lead compounds. These new technologies promise more efficacious, safe, and cost-effective medications, while minimizing drug development time and maximizing profits.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.
Wan, Kelvin H; Chong, Kelvin K L; Young, Alvin L
2015-12-08
Post-traumatic orbital reconstruction remains a surgical challenge and requires careful preoperative planning, sound anatomical knowledge and good intraoperative judgment. Computer-assisted technology has the potential to reduce error and subjectivity in the management of these complex injuries. A systematic review of the literature was conducted to explore the emerging role of computer-assisted technologies in post-traumatic orbital reconstruction, in terms of functional and safety outcomes. We searched for articles comparing computer-assisted procedures with conventional surgery and studied outcomes on diplopia, enophthalmos, or procedure-related complications. Six observational studies with 273 orbits at a mean follow-up of 13 months were included. Three out of 4 studies reported significantly fewer patients with residual diplopia in the computer-assisted group, while only 1 of the 5 studies reported better improvement in enophthalmos in the assisted group. Types and incidence of complications were comparable. Study heterogeneities limiting statistical comparison by meta-analysis will be discussed. This review highlights the scarcity of data on computer-assisted technology in orbital reconstruction. The result suggests that computer-assisted technology may offer potential advantage in treating diplopia while its role remains to be confirmed in enophthalmos. Additional well-designed and powered randomized controlled trials are much needed.
1992-03-16
"A Hidden U.S. Export: Higher Education." The Washington Post, 16 February 1992, H1 and H4. Brandin, David H., and Michael A. Harrison. The...frequent significant technological change now occurs within the individual person's working lifespan, life-long education is a necessity to remain...INDUSTRIAL REVOLUTION. The phenomenal increase in speed and in raw power of computer processors, the shrinking size and cost of basic computing systems, the
DEVELOPMENT AND APPLICATIONS OF CFD SIMULATIONS SUPPORTING URBAN AIR QUALITY AND HOMELAND SECURITY
Prior to September 11, 2001 developments of Computational Fluid Dynamics (CFD) were begun to support air quality applications. CFD models are emerging as a promising technology for such assessments, in part due to the advancing power of computational hardware and software. CFD si...
Tableau Economique: Teaching Economics with a Tablet Computer
ERIC Educational Resources Information Center
Scott, Robert H., III
2011-01-01
The typical method of instruction in economics is chalk and talk. Economics courses often require writing equations and drawing graphs and charts, which are all best done in freehand. Unlike static PowerPoint presentations, tablet computers create dynamic nonlinear presentations. Wireless technology allows professors to write on their tablets and…
Mobile computing device as tools for college student education: a case on flashcards application
NASA Astrophysics Data System (ADS)
Kang, Congying
2012-04-01
Traditionally, college students have used flash cards as a tool for memorizing large bodies of knowledge, such as nomenclature, structures, and reactions in chemistry. Educational and information technology has enabled flash cards to be viewed on computers, with tools like Slides and PowerPoint serving as channels for drill and feedback for learners. The current generation of students is more adept with information technology and mobile computing devices; for example, they use their mobile phones far more intensively every day. Trends in using the mobile phone as an educational tool are analyzed, and an educational technology initiative is proposed that uses mobile-phone flash-card applications to help students learn biology and chemistry. Experiments show that users responded positively to these mobile flash cards.
ERIC Educational Resources Information Center
Bickford, J. H., III
2010-01-01
This paper is based on three beliefs. First, technology can engage and challenge students' thinking. Second, technology can assist students in creating quality work. Finally, computer-generated student-work can be used as educational tools in productive ways that other student-work cannot. This article suggests new ways to use old technologies to…
Enabling technologies for fiber optic sensing
NASA Astrophysics Data System (ADS)
Ibrahim, Selwan K.; Farnan, Martin; Karabacak, Devrez M.; Singer, Johannes M.
2016-04-01
In order for fiber optic sensors to compete with electrical sensors, several critical parameters need to be addressed, such as performance, cost, size, and reliability. Relying on technologies developed in other industrial sectors helps to achieve this goal more efficiently and cost-effectively. FAZ Technology has developed a tunable-laser-based optical interrogator built on technologies from the telecommunication sector, and optical transducers/sensors based on components sourced from the automotive market. By combining Fiber Bragg Grating (FBG) sensing technology with the above, high-speed, high-precision, reliable quasi-distributed optical sensing systems for temperature, pressure, acoustics, acceleration, etc. have been developed. Careful design is needed to filter out sources of measurement drift and error due to effects such as polarization and birefringence, coating imperfections, and sensor packaging. In addition, to achieve high-speed, high-performance optical sensing, combining and synchronizing multiple optical interrogators, much as processors are combined to deliver supercomputing power, is an attractive solution. This path can be achieved by using photonic integrated circuit (PIC) technology, which opens the door to scaling up and delivering powerful optical sensing systems efficiently and cost-effectively.
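The conversion at the heart of such an FBG interrogation system, from a measured Bragg-wavelength shift to a physical quantity, can be sketched as below. This is an illustrative example, not FAZ Technology's implementation; the nominal wavelength and the ~10 pm/K sensitivity are typical textbook values for silica fiber near 1550 nm, assumed here.

```python
# Hedged sketch: converting an FBG temperature sensor's measured
# Bragg-wavelength shift into a temperature change. Both constants
# below are assumed illustrative values, not FAZ design figures.

BRAGG_WAVELENGTH_NM = 1550.0      # nominal Bragg wavelength of the grating
TEMP_SENSITIVITY_PM_PER_K = 10.0  # assumed thermo-optic sensitivity

def temperature_change(measured_wavelength_nm: float) -> float:
    """Return the temperature change (K) implied by a wavelength shift."""
    shift_pm = (measured_wavelength_nm - BRAGG_WAVELENGTH_NM) * 1000.0
    return shift_pm / TEMP_SENSITIVITY_PM_PER_K

# A grating read at 1550.25 nm implies a +25 K excursion.
print(temperature_change(1550.25))
```

In a real quasi-distributed system the interrogator performs this conversion for every grating on the fiber, after the drift-compensation steps the abstract mentions.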
Bång, Magnus; Larsson, Anders; Eriksson, Henrik
2003-01-01
In this paper, we present a new approach to clinical workplace computerization that departs from the window-based user interface paradigm. NOSTOS is an experimental computer-augmented work environment designed to support data capture and teamwork in an emergency room. NOSTOS combines multiple technologies, such as digital pens, walk-up displays, headsets, a smart desk, and sensors to enhance an existing paper-based practice with computer power. The physical interfaces allow clinicians to retain mobile paper-based collaborative routines and still benefit from computer technology. The requirements for the system were elicited from situated workplace studies. We discuss the advantages and disadvantages of augmenting a paper-based clinical work environment.
When Worlds Collide: An Augmented Reality Check
ERIC Educational Resources Information Center
Villano, Matt
2008-01-01
The technology is simple: Mobile technologies such as handheld computers and global positioning systems work in sync to create an alternate, hybrid world that mixes virtual characters with the actual physical environment. The result is a digital simulation that offers powerful game-playing opportunities and allows students to become more engaged…
Radiofrequency/Microwave Radiation Biological Effects and Safety Standards: A Review
1994-06-01
reported that a 50 year old woman had developed cataracts after intermittent exposure to a 2.45 GHz microwave oven. The incident power density levels were...include: Surveillance, Communications, Command and Control, Intelligence, Signal Processing, Computer Science and Technology, Electron Technology, ...
Computer Simulation and New Ways of Creating Matched-Guise Techniques
ERIC Educational Resources Information Center
Connor, Robert T.
2008-01-01
Matched-guise experiments have passed their 40th year as a powerful attitudinal research tool, and they are becoming more relevant and useful as technology is applied to language research. Combining the specificity of conversation analysis with the generalizability of social psychology research, technological innovations allow the measurement of…
You and Technology, A High School Case Study Text.
ERIC Educational Resources Information Center
Damaskos, Nickander J., Ed.; Smyth, Michael P., Ed.
This second draft of a manuscript for a high school engineering and technology course uses case studies as its format. The principles associated with various engineering problems are presented along with their effects on daily life. Topics include the computer, the automotive power system, satellite communications, the petroleum industry, water…
The Challenge of Computer Furniture.
ERIC Educational Resources Information Center
Dolan, Thomas G.
2003-01-01
Explains that classrooms and school furniture were built for a different era and often do not have sufficient power for technology, discussing what is needed to support modern technology in education. One solution involves modular cabling and furniture that is capable of being rearranged. Currently, there are no comprehensive standards from which…
Embedded 100 Gbps Photonic Components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznia, Charlie
This innovation to fiber optic component technology increases the performance, reduces the size and reduces the power consumption of optical communications within dense network systems, such as advanced distributed computing systems and data centers. VCSEL technology is enabling short-reach (< 100 m) and >100 Gbps optical interconnections over multi-mode fiber in commercial applications.
Building a Better Biology Lab? Testing Tablet PC Technology in a Core Laboratory Course
ERIC Educational Resources Information Center
Pryor, Gregory; Bauer, Vernon
2008-01-01
Tablet PC technology can enliven the classroom environment because it is dynamic, interactive, and "organic," relative to the rigidity of chalkboards, whiteboards, overhead projectors, and PowerPoint presentations. Unlike traditional computers, tablet PCs employ "digital linking," allowing instructors and students to freehand annotate, clarify,…
Using Technology to Build Solar-Powered Drag Racers
ERIC Educational Resources Information Center
Fireman, Jerry
2012-01-01
The Colfax High School (Colfax, California) Design Tech program incorporates both academic instruction and practical use of advanced technology to prepare students for the wide range of occupations that involve working with metal, wood, computers, and electronics. In this article, the author describes how Colfax students applied academic learning,…
Women@Work: Listening to Gendered Relations of Power in Teachers' Talk about New Technologies.
ERIC Educational Resources Information Center
Jenson, Jennifer; Rose, Chloe Brushwood
2003-01-01
Examines teachers' working identities, highlighting gender inequities among teachers, within school systems, and in society, especially in relation to computers. Highlights tensions central to teaching in relation to new technologies, emphasizing gender inequities that structure understandings of teaching. Documents how, for the teachers studied,…
ERIC Educational Resources Information Center
Li, Haiqing
2010-01-01
With rapid advancements in information and communication technologies, computer-mediated communication channels such as email, web, mobile smart-phones with SMS, social networking websites (Facebook), multimedia websites, and OEM devices provide users with multiple technology choices to seek information. However, no study has compared the…
Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...
2015-05-22
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
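The performance-per-watt comparison the abstract describes reduces to a simple ratio: benchmark throughput divided by measured wall power, i.e. work per joule. The sketch below illustrates the metric only; every number in it is made up for illustration, not a measurement from the paper.

```python
# Hedged sketch of a performance-per-watt ranking across platforms.
# Throughputs (events/s) and powers (W) are invented placeholder values.

def perf_per_watt(events_per_sec: float, watts: float) -> float:
    """Events processed per joule of energy consumed."""
    return events_per_sec / watts

platforms = {
    "x86 Xeon (hypothetical)": perf_per_watt(1000.0, 200.0),
    "X-Gene ARMv8 (hypothetical)": perf_per_watt(400.0, 60.0),
    "Xeon Phi (hypothetical)": perf_per_watt(1800.0, 300.0),
}
best = max(platforms, key=platforms.get)
print(best, platforms[best])
```

Note how a platform with much lower absolute throughput can still win on this metric, which is exactly why low-power SoCs are interesting for power-constrained DHTC.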
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Jin, Shuangshuang; Chen, Yousu
This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACKTM high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.
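The time-stepping core of such a dynamic simulation can be illustrated with the smallest possible case: one classical generator modeled by the swing equation, integrated with forward Euler. This is only a sketch of the idea; GridPACK's decoupled parallel algorithm, detailed machine models, and solver options are far richer, and all per-unit parameters below are assumed values.

```python
# Minimal sketch of a power-system dynamic simulation inner loop:
# single-machine swing equation, forward Euler. All constants assumed.
import math

H = 5.0                        # inertia constant (s), assumed
D = 1.0                        # damping coefficient, assumed
OMEGA_S = 2 * math.pi * 60.0   # synchronous speed (rad/s)
P_MECH = 0.9                   # mechanical power (pu), assumed
P_MAX = 1.5                    # peak electrical power (pu), assumed

def step(delta: float, omega: float, dt: float):
    """One forward-Euler step of the per-unit swing equation."""
    p_elec = P_MAX * math.sin(delta)
    domega = (OMEGA_S / (2 * H)) * (P_MECH - p_elec - D * omega / OMEGA_S)
    return delta + dt * omega, omega + dt * domega

# Start at the steady-state rotor angle; the trajectory should stay there.
delta, omega = math.asin(P_MECH / P_MAX), 0.0
for _ in range(1000):
    delta, omega = step(delta, omega, dt=0.001)
print(round(delta, 6), round(omega, 6))
```

Faster-than-real-time operation means the wall-clock cost of each such step, across all 17,000 buses and their detailed models, must stay below the simulated time step, which is what motivates the HPC parallelization.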
Note: The full function test explosive generator.
Reisman, D B; Javedani, J B; Griffith, L V; Ellsworth, G F; Kuklo, R M; Goerz, D A; White, A D; Tallerico, L J; Gidding, D A; Murphy, M J; Chase, J B
2010-03-01
We have conducted three tests of a new pulsed power device called the full function test. These tests represented the culmination of an effort to establish a high energy pulsed power capability based on high explosive pulsed power (HEPP) technology. This involved an extensive computational modeling, engineering, fabrication, and fielding effort. The experiments were highly successful and a new U.S. record for magnetic energy was obtained.
System balance analysis for vector computers
NASA Technical Reports Server (NTRS)
Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.
1975-01-01
The availability of vector processors capable of sustaining computing rates of 10 to the 8th power arithmetic results per second raised the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that even under ideal conditions, the processors will frequently be waiting for problem data.
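The balance argument can be sketched as a back-of-envelope comparison: for a banded solve of order n and bandwidth b, factorization costs roughly n*b^2 operations on roughly n*b matrix words, so the processor waits whenever streaming the band takes longer than the arithmetic. The 10^8 results/s rate comes from the abstract; the problem size and the device rate below are illustrative assumptions, not the paper's figures.

```python
# Hedged sketch of a system-balance check for a banded linear solve.
# n, b, and the storage-device rate are invented illustrative values.

def compute_time(n: int, b: int, flops_per_sec: float) -> float:
    """Seconds of arithmetic: ~n*b^2 operations for band factorization."""
    return (n * b * b) / flops_per_sec

def io_time(n: int, b: int, words_per_sec: float) -> float:
    """Seconds to stream the ~n*b words of the band from storage."""
    return (n * b) / words_per_sec

n, b = 100_000, 200
t_cpu = compute_time(n, b, 1e8)   # 10^8 results/s, per the abstract
t_io = io_time(n, b, 1e5)         # hypothetical slow peripheral device
print(t_cpu, t_io, t_io > t_cpu)  # I/O dominates -> processor waits
```

When `t_io` exceeds `t_cpu`, as here, the vector processor idles on data, which is precisely the imbalance the study reports even under ideal conditions.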
A software control system for the ACTS high-burst-rate link evaluation terminal
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Daugherty, Elaine S.
1991-01-01
Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal, and displaying data. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit to and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.
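The idea behind the uplink power control module can be sketched as a simple controller that raises transmitter output as estimated rain fade deepens, clamped at the amplifier's ceiling. The one-dB-per-dB law and both power limits below are illustrative assumptions, not the HBR-LET design values.

```python
# Hedged sketch of rain-fade compensation for a satellite uplink.
# Nominal and maximum powers are assumed, not the terminal's figures.

P_NOMINAL_DBM = 30.0   # clear-sky output power, assumed
P_MAX_DBM = 40.0       # amplifier ceiling, assumed

def uplink_power(fade_db: float) -> float:
    """Add one dB of output per dB of estimated rain fade, clamped."""
    return min(P_NOMINAL_DBM + fade_db, P_MAX_DBM)

print(uplink_power(0.0))   # clear sky: nominal power
print(uplink_power(6.5))   # moderate fade: boosted output
print(uplink_power(15.0))  # fade exceeds headroom: clamp at ceiling
```

The clamp matters in practice: once fade exceeds the available headroom, the link margin degrades no matter what the controller commands.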
Planning and Resource Management in an Intelligent Automated Power Management System
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1991-01-01
Power system management is a process of guiding a power system toward the objective of a continuous supply of electrical power to a set of loads. Spacecraft power system management requires planning and scheduling, since electrical power is a scarce resource in space. The automation of power system management for future spacecraft has been recognized as an important R&D goal. Several automation technologies have emerged, including the use of expert systems for automating human problem-solving capabilities, such as rule-based expert systems for fault diagnosis and load scheduling. It is questionable whether current-generation expert system technology is applicable to power system management in space. The objective of ADEPTS (ADvanced Electrical Power management Techniques for Space systems) is to study new techniques for power management automation. These techniques involve integrating current expert system technology with that of parallel and distributed computing, as well as a distributed, object-oriented approach to software design. The focus of the current study is the integration of new procedures for automatically planning and scheduling loads with procedures for performing fault diagnosis and control. The objective is the concurrent execution of both sets of tasks on separate transputer processors, thus adding parallelism to the overall management process.
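The kind of load-scheduling task the abstract assigns to the planning component can be sketched as a greedy admission policy under a power budget. This is an illustrative toy, not the ADEPTS algorithm; the load list and budget are made-up values.

```python
# Hedged sketch: greedy priority-based load scheduling under a power
# budget. Loads, priorities, and the budget are invented examples.

def schedule(loads, budget_watts):
    """Admit loads in priority order until the power budget is exhausted."""
    admitted, used = [], 0.0
    for name, watts, priority in sorted(loads, key=lambda l: l[2]):
        if used + watts <= budget_watts:
            admitted.append(name)
            used += watts
    return admitted

loads = [  # (name, watts, priority: lower = more critical)
    ("life-support", 400.0, 0),
    ("comms", 250.0, 1),
    ("experiment-A", 300.0, 2),
    ("experiment-B", 200.0, 3),
]
print(schedule(loads, budget_watts=900.0))
```

Note the greedy policy skips a high-power mid-priority load but still admits a smaller lower-priority one that fits, a simple example of why real schedulers need planning rather than fixed rules.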
Performing quantum computing experiments in the cloud
NASA Astrophysics Data System (ADS)
Devitt, Simon J.
2016-09-01
Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sectors, combined with extraordinary theoretical and experimental progress, has solidified this technology as a major advancement of the 21st century. As anticipated by many, some of the first realizations of quantum computing technology have occurred over the cloud, with users logging onto dedicated hardware over the classical internet. Recently, IBM released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments. We utilize the IBM chip to realize protocols in quantum error correction, quantum arithmetic, quantum graph theory, and fault-tolerant quantum computation by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
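A minimal example of the kind of two-qubit circuit one can submit to cloud hardware like the Quantum Experience is Bell-pair preparation: a Hadamard followed by a CNOT. The sketch below simulates the ideal (noiseless) circuit with a plain state vector; it is not one of the paper's four experiments, and the real chip would return noisy measurement counts rather than exact amplitudes.

```python
# Hedged sketch: ideal state-vector simulation of H (on qubit 0) then
# CNOT (qubit 0 controls qubit 1), producing the Bell state (|00>+|11>)/sqrt(2).
import math

def apply(gate, state):
    """Multiply a 4x4 gate matrix into a 4-amplitude state vector."""
    return [sum(gate[r][c] * state[c] for c in range(4)) for r in range(4)]

h = 1 / math.sqrt(2)
H_on_q0 = [  # Hadamard on the first qubit, identity on the second
    [h, 0, h, 0],
    [0, h, 0, h],
    [h, 0, -h, 0],
    [0, h, 0, -h],
]
CNOT = [  # first qubit controls, second qubit is the target
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]

state = [1.0, 0.0, 0.0, 0.0]          # |00>
state = apply(CNOT, apply(H_on_q0, state))
print([round(a, 3) for a in state])   # amplitudes of |00>,|01>,|10>,|11>
```

Comparing such ideal amplitudes against the chip's measured counts is one simple way to quantify the hardware noise the paper reports.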
NASA Astrophysics Data System (ADS)
Sabchevski, S.; Idehara, T.; Damyanova, M.; Zhelyazkov, I.; Balabanova, E.; Vasileva, E.
2018-03-01
Gyrotrons are the most powerful sources of CW coherent radiation in the sub-THz and THz frequency bands. In recent years, they have demonstrated a remarkable potential for bridging the so-called THz-gap in the electromagnetic spectrum and opened the road to many novel applications of the terahertz waves. Among them are various advanced spectroscopic techniques (e.g., ESR and DNP-NMR), plasma physics and fusion research, materials processing and characterization, imaging and inspection, new medical technologies and biological studies. In this paper, we review briefly the current status of the research in this broad field and present our problem-oriented software packages developed recently for numerical analysis, computer-aided design (CAD) and optimization of gyrotrons.
Issues in the Convergence of Control with Communication and Computation
2004-10-04
Library/Upload/116/Cal1.doc. [42] M. H. Shwehdi and A. Z. Khan, "A power line data communication interface using spread spectrum technology in home automation," IEEE Transactions on Power Delivery, vol. 11, pp. 1232-1237, July 1996. ISSN: 0885-8977. [43] R. G. Olsen, "Technical considerations for
Power source evaluation capabilities at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doughty, D.H.; Butler, P.C.
1996-04-01
Sandia National Laboratories maintains one of the most comprehensive power source characterization facilities in the U.S. National Laboratory system. This paper describes the capabilities for evaluation of fuel cell technologies. The facility has a rechargeable battery test laboratory and a test area for performing nondestructive and functional computer-controlled testing of cells and batteries.
Information Technology - Its Impact on Decision-Making.
ERIC Educational Resources Information Center
Hammer, Carl
Electronic systems of the future are bound to be larger, faster, and more reliable. They will furnish management with uninterrupted services in a real-time mode for practically all applications. In short, they will provide computing power as a utility company of today provides electric power. But the most spectacular advance is likely to be the…
The World as Viewed by and with Unpaired Electrons
Eaton, Sandra S.; Eaton, Gareth R.
2012-01-01
Recent advances in electron paramagnetic resonance (EPR) include capabilities for applications to areas as diverse as archeology, beer shelf life, biological structure, dosimetry, in vivo imaging, molecular magnets, and quantum computing. Enabling technologies include multifrequency continuous wave, pulsed, and rapid scan EPR. Interpretation is enhanced by increasingly powerful computational models. PMID:22975244
ERIC Educational Resources Information Center
Abramovich, Sergei; Pieper, Anne
1996-01-01
Describes the use of manipulatives for solving simple combinatorial problems which can lead to the discovery of recurrence relations for permutations and combinations. Numerical evidence and visual imagery generated by a computer spreadsheet through modeling these relations can enable students to experience the ease and power of combinatorial…
Closing the Gap--Information Systems Curriculum and Changing Global Market
ERIC Educational Resources Information Center
Henson, Kerry; Kamal, Mustafa
2010-01-01
The outsourcing of basic computing work such as computer programming, database design, customer service operations, and system development, to mention a few, has changed the conditions of employment in IT. Many of the projects that went offshore did not perform well due to failure to consider important factors in business dimensions.
Instructional Computer Programs and the Phonological Deficits of Dyslexic Children
ERIC Educational Resources Information Center
Cammarata, Lisa
2006-01-01
The 21st century is a time to contemplate the power of the technological advances that have occurred. Computers have become idea engines: tools used for thinking, performing, processing, and instructing. No one understands or appreciates this phenomenon more than children with dyslexia. These children's ability to learn or…
ERIC Educational Resources Information Center
White, Su
2007-01-01
Computer technology has been harnessed for education in UK universities ever since the first computers for research were installed at 10 selected sites in 1957. Subsequently, real costs have fallen dramatically. Processing power has increased; network and communications infrastructure has proliferated, and information has become unimaginably…
Power monitoring and control for large scale projects: SKA, a case study
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis
2016-07-01
Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding computation, storage, management, and, above all, power requirements. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach to the Information and Communications Technologies (ICT) adopted for data processing to enable operational compliance with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing the generation and delivery of electricity at the system level are paramount to avoiding future inefficiencies and higher costs and to enabling fulfillment of the Key Science Cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.
High-resolution PET [Positron Emission Tomography] for Medical Science Studies
DOE R&D Accomplishments Database
Budinger, T. F.; Derenzo, S. E.; Huesman, R. H.; Jagust, W. J.; Valk, P. E.
1989-09-01
One of the unexpected fruits of basic physics research and the computer revolution is the noninvasive imaging power available to today's physician. Technologies that were strictly the province of research scientists only a decade or two ago now serve as the foundations for such standard diagnostic tools as x-ray computer tomography (CT), magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), ultrasound, single photon emission computed tomography (SPECT), and positron emission tomography (PET). Furthermore, prompted by the needs of both the practicing physician and the clinical researcher, efforts to improve these technologies continue. This booklet endeavors to describe the advantages of achieving high resolution in PET imaging.
Multi-kw dc power distribution system study program
NASA Technical Reports Server (NTRS)
Berkery, E. A.; Krausz, A.
1974-01-01
The first phase of the Multi-kw dc Power Distribution Technology Program is reported and involves the test and evaluation of a technology breadboard in a specifically designed test facility according to design concepts developed in a previous study on space vehicle electrical power processing, distribution, and control. The static and dynamic performance, fault isolation, reliability, electromagnetic interference characteristics, and operability factors of high voltage distribution systems were studied in order to gain a technology base for the use of high voltage dc systems in future aerospace vehicles. Detailed technical descriptions are presented and include data for the following: (1) dynamic interactions due to operation of solid state and electromechanical switchgear; (2) multiplexed and computer controlled supervision and checkout methods; (3) pulse width modulator design; and (4) cable design factors.
Autonomous self-powered structural health monitoring system
NASA Astrophysics Data System (ADS)
Qing, Xinlin P.; Anton, Steven R.; Zhang, David; Kumar, Amrita; Inman, Daniel J.; Ooi, Teng K.
2010-03-01
Structural health monitoring technology is perceived as a revolutionary method of determining the integrity of structures involving the use of multidisciplinary fields including sensors, materials, system integration, signal processing and interpretation. The core of the technology is the development of self-sufficient systems for the continuous monitoring, inspection and damage detection of structures with minimal labor involvement. A major drawback of the existing technology for real-time structural health monitoring is the requirement for external electrical power input. For some applications, such as missiles or combat vehicles in the field, this factor can drastically limit the use of the technology. Having an on-board electrical power source that is independent of the vehicle power system can greatly enhance the SHM system and make it a completely self-contained system. In this paper, using the SMART layer technology as a basis, an Autonomous Self-powered (ASP) Structural Health Monitoring (SHM) system has been developed to solve the major challenge facing the transition of SHM systems into field applications. The architecture of the self-powered SHM system was first designed. There are four major components included in the SHM system: SMART Layer with sensor network, low power consumption diagnostic hardware, rechargeable battery with energy harvesting device, and host computer with supporting software. A prototype of the integrated self-powered active SHM system was built for performance and functionality testing. Results from the evaluation tests demonstrated that a fully charged battery system is capable of powering the SHM system for active scanning up to 10 hours.
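The abstract's "up to 10 hours" of active scanning is, at first order, a battery-capacity-over-current-draw calculation. The sketch below shows only that arithmetic; the capacity and draw values are assumptions chosen to illustrate it, not the ASP-SHM design numbers.

```python
# Hedged sketch: first-order runtime estimate for a battery-powered
# SHM system. Capacity and current draw are invented example values.

def runtime_hours(capacity_mah: float, draw_ma: float) -> float:
    """Hours of operation: battery capacity over average current draw."""
    return capacity_mah / draw_ma

# e.g. a 2000 mAh pack feeding a 200 mA active-scan load lasts ~10 h
print(runtime_hours(2000.0, 200.0))
```

An energy-harvesting device like the one in the paper extends this figure by recharging the pack between (or during) scans, so the sustained duty cycle, not just raw capacity, sets the real endurance.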
Television broadcast from space systems: Technology, costs
NASA Technical Reports Server (NTRS)
Cuccia, C. L.
1981-01-01
Broadcast satellite systems are described. The technologies which are unique to both high power broadcast satellites and small TV receive-only earth terminals are also described. A cost assessment of both space and earth segments is included and appendices present both a computer model for satellite cost and the pertinent reported experience with the Japanese BSE.
Exploring Competency Development with Mobile Devices
ERIC Educational Resources Information Center
DiGiuseppe, Maurice; Partosoedarso, Elita; Van Oostveen, Roland; Desjardins, Francois
2013-01-01
Computer-based technologies have been used in the field of education for over thirty years. More recently, however, powerful and more affordable mobile technologies have become popular in everyday life and in the education system. This paper reports on an online survey of the student body at a university in Ontario, Canada, focused on the use of a wide…
Applying the Multisim Technology to Teach the Course of High Frequency Power Amplifier
ERIC Educational Resources Information Center
Lv, Gang; Xue, Yuan-Sheng
2011-01-01
As one important professional base course in the electric information specialty, the course of "high frequency electronic circuit" has strong theoretical characteristic and abstract content. To enhance the teaching quality of this course, the computer simulation technology based on Multisim is introduced into the teaching of "high…
The State of Simulations: Soft-Skill Simulations Emerge as a Powerful New Form of E-Learning.
ERIC Educational Resources Information Center
Aldrich, Clark
2001-01-01
Presents responses of leaders from six simulation companies about challenges and opportunities of soft-skills simulations in e-learning. Discussion includes: evaluation metrics; role of subject matter experts in developing simulations; video versus computer graphics; technology needed to run simulations; technology breakthroughs; pricing;…
TheBrain Technologies Corporation: Collapsing the Time to Knowledge.
ERIC Educational Resources Information Center
Misek, Marla
2003-01-01
TheBrain was created to take advantage of the most powerful information processor in existence: the human mind. Explains the products of TheBrain Technologies Corporation, which has developed computer interfaces to help individual users and corporations organize information in ways that make sense to them in the proper context. Describes a…
Learning through Information Communication Technology: Critical Perspectives
ERIC Educational Resources Information Center
Severinsen, G.
2003-01-01
Technology in secondary schools has become of increasing interest as the power of the microchip has developed. For the students of Mathematics, computers and handheld graphic calculators need to be accessible to all. They are relevant to the needs of the students' courses and to support and develop their Mathematical learning (Smith, 1997).…
Earth Science Learning in SMALLab: A Design Experiment for Mixed Reality
ERIC Educational Resources Information Center
Birchfield, David; Megowan-Romanowicz, Colleen
2009-01-01
Conversational technologies such as email, chat rooms, and blogs have made the transition from novel communication technologies to powerful tools for learning. Currently virtual worlds are undergoing the same transition. We argue that the next wave of innovation is at the level of the computer interface, and that mixed-reality environments offer…
NASA Technical Reports Server (NTRS)
1990-01-01
A brief but comprehensive review is given of the technical accomplishments of the NASA Lewis Research Center during the past year. Topics covered include instrumentation and controls technology; internal fluid dynamics; aerospace materials, structures, propulsion, and electronics; space flight systems; cryogenic fluids; Space Station Freedom systems engineering, photovoltaic power module, electrical systems, and operations; and engineering and computational support.
Factors Affecting Pre-Service TESOL Teachers' Attitudes towards Using CD-ROM Dictionary
ERIC Educational Resources Information Center
Issa, Jinan Hatem; Jamil, Hazri
2011-01-01
Rapid technological advances in communication technologies and computational power are altering the nature of knowledge, skills, talents, and the know-how of individuals. A CD-ROM dictionary is an interesting and effective teaching tool that captures pre-service teachers' interest and does much more than just translate, especially with the…
Videodisc/Microcomputer Technology in Wildland Fire Behavior Training
M. J. Jenkins; K.Y. Matsumoto-Grah
1987-01-01
Interactive video is a powerful medium, bringing together the emotional impact of video and film and the interactive capabilities of the computer. Interactive videodisc instruction can be used as a tutorial, for drill and practice and in simulations, as well as for information storage. Videodisc technology is being used in industrial, military and medical applications...
ERIC Educational Resources Information Center
Reich, Justin; Daccord, Thomas
2009-01-01
Used wisely, academic technology empowers students to take responsibility for their own learning. "In Leonardo's Laptop," Ben Shneiderman provides teachers with a powerful framework, "Collect-Relate-Create-Donate" (CRCD), for designing student-centered learning opportunities using computers. Shneiderman developed his model by…
Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozacik, Stephen
Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.
Advancing crime scene computer forensics techniques
NASA Astrophysics Data System (ADS)
Hosmer, Chet; Feldman, John; Giordano, Joe
1999-02-01
Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing 'traditional crimes' such as embezzlements, thefts, extortion and murder. This paper will focus on reviewing the current state of the art in data recovery and evidence construction tools used in both the field and the laboratory for prosecution purposes.
Automated Power Systems Management (APSM)
NASA Technical Reports Server (NTRS)
Bridgeforth, A. O.
1981-01-01
A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.
Automation technology for aerospace power management
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1982-01-01
The growing size and complexity of spacecraft power systems, coupled with limited space/ground communications, necessitate increasingly automated onboard control systems. Research in computer science, particularly artificial intelligence, has developed methods and techniques for constructing man-machine systems with problem-solving expertise in limited domains which may contribute to the automation of power systems. Since these systems perform tasks that are typically performed by human experts, they have become known as Expert Systems. A review of the current state of the art in expert systems technology is presented, and potential applications in power systems management are considered. It is concluded that expert systems appear to have significant potential for improving the productivity of operations personnel in aerospace applications and for automating the control of many aerospace systems.
Intelligent tutoring systems research in the training systems division: Space applications
NASA Technical Reports Server (NTRS)
Regian, J. Wesley
1988-01-01
Computer-Aided Instruction (CAI) is a mature technology used to teach students in a wide variety of domains. The introduction of Artificial Intelligence (AI) technology to the field of CAI has prompted research and development efforts in an area known as Intelligent Computer-Aided Instruction (ICAI). In some cases, ICAI has been touted as a revolutionary alternative to traditional CAI. With the advent of powerful, inexpensive school computers, ICAI is emerging as a potential rival to CAI. In contrast, one may conceive of Computer-Based Training (CBT) systems as lying along a continuum which runs from CAI to ICAI. Although the key difference between the two is intelligence, there is no commonly accepted definition of what constitutes an intelligent instructional system.
Enabling Earth Science: The Facilities and People of the NCCS
NASA Technical Reports Server (NTRS)
2002-01-01
The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly related to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest computing, mass storage, and networking technologies. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption.
These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
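The unit energy consumption (UEC) discussed above is, in essence, a duty-cycle-weighted sum of per-mode power draws. The sketch below illustrates the arithmetic; the wattages and daily usage hours are hypothetical placeholders, not measurements from the study.

```python
# Hypothetical per-mode power draws (watts) and daily usage (hours) for a monitor
modes = {
    "on":    {"watts": 60.0, "hours_per_day": 4.0},
    "sleep": {"watts": 2.0,  "hours_per_day": 8.0},
    "off":   {"watts": 1.0,  "hours_per_day": 12.0},
}

def annual_uec_kwh(modes):
    """Unit energy consumption: duty-cycle-weighted annual kWh."""
    daily_wh = sum(m["watts"] * m["hours_per_day"] for m in modes.values())
    return daily_wh * 365 / 1000.0

uec = annual_uec_kwh(modes)   # (240 + 16 + 12) Wh/day -> 97.82 kWh/yr
```

With such a low sleep draw, the on-mode term dominates the sum, which is the study's point about off and on power mattering more.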
Intelligent Computer Assisted Instruction (ICAI): Formative Evaluation of Two Systems
1986-03-01
…appreciation for the power of computer technology. Yale students are a strikingly high performing group by traditional academic… [OCR fragment of the report documentation page: Intelligent Computer Assisted Instruction (ICAI): Formative Evaluation of Two Systems, April 1984 - August 1985; performing organization: Jet Propulsion Laboratory]
Opportunities for nonvolatile memory systems in extreme-scale high-performance computing
Vetter, Jeffrey S.; Mittal, Sparsh
2015-01-12
For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.
High-power microwave LDMOS transistors for wireless data transmission technologies (Review)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, E. V., E-mail: E.Kouzntsov@tcen.ru; Shemyakin, A. V.
The fields of application, structure, fabrication, and packaging technology of high-power microwave LDMOS transistors and the main advantages of these devices were analyzed. Basic physical parameters and some technology factors were matched for optimum device operation. Solid-state microwave electronics has been actively developed for the last 10-15 years. Simultaneously with the improvement of older devices, new devices and structures are actively being adopted and developed, and new semiconductor materials are being commercialized. Microwave LDMOS technology is in demand in such fields as avionics, civil and military radars, repeaters, base stations of cellular communication systems, television and broadcasting transmitters, and transceivers for high-speed wireless computer networks (promising Wi-Fi and Wi-Max standards).
NASA Technical Reports Server (NTRS)
Kovarik, Madeline
1993-01-01
Intelligent computer aided training systems hold great promise for the application of this technology to mainstream education and training. Yet this technology, which holds such a vast potential impact for the future of education and training, has had little impact beyond the enclaves of government research labs. This is largely due to the inaccessibility of the technology to those individuals in whose hands it can have the greatest impact: teachers and educators. Simply throwing technology at educators and expecting them to use it as an effective tool is not the answer. This paper provides a background into the use of technology as a training tool. MindLink, developed by HyperTech Systems, provides trainers with a powerful rule-based tool that can be integrated directly into a Windows application. By embedding expert systems technology, MindLink makes it more accessible and easier to master.
Embedded Systems and TensorFlow Frameworks as Assistive Technology Solutions.
Mulfari, Davide; Palla, Alessandro; Fanucci, Luca
2017-01-01
In the field of deep learning, this paper presents the design of a wearable computer vision system for visually impaired users. The Assistive Technology solution exploits a powerful single board computer and smart glasses with a camera in order to allow its user to explore the objects within his surrounding environment, while it employs the Google TensorFlow machine learning framework to classify the acquired stills in real time. The proposed aid can therefore increase the user's awareness of the explored environment, and it interacts with its user by means of audio messages.
Science and Technology Review, January-February 1997
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Table of contents: accelerators at Livermore; the B-Factory and the Big Bang; assessing exposure to radiation; next generation of computer storage; and a powerful new tool to detect clandestine nuclear tests.
Water Power Data and Tools | Water Power | NREL
NREL combines computer modeling tools and data with state-of-the-art design and analysis. Resources include the National Wind Technology Center's Information Portal, a WEC-Sim fact sheet, and the WEC Design Response Toolbox, which provides extreme response and fatigue analysis tools.
A Long-Term Model for the Curriculum of Training for an Electric-Power Specialist
ERIC Educational Resources Information Center
Venikov, V. A.
1978-01-01
Long-term planning for professional training of electric-power specialists in Russia will have to (1) recognize the need for specialists to adapt to unforeseen developments in the field, (2) include new mathematics, physics, and computer technology, and (3) be prepared for changes in methods of production and transformation of energy. (AV)
Learning technologies and the cyber-science classroom
NASA Astrophysics Data System (ADS)
Houlihan, Gerard
Access to computer and communication technology has long been regarded as 'part-and-parcel' of a good education. No educator can afford to ignore the profound impact of learning technologies on the way we teach science, nor fail to acknowledge that information literacy and computing skills will be fundamental to the practice of science in the next millennium. Nevertheless, there is still confusion concerning what technologies educators should employ in teaching science. Furthermore, a lack of knowledge combined with the pressure to be 'seen' utilizing technology has led some schools to waste scarce resources in a 'grab-bag' attitude towards computers and technology. Such popularized 'wish lists' can only drive schools to accumulate expensive equipment for no real learning purpose. In the future, educators will have to reconsider their curriculum and pedagogy with a focus on the learning environment before determining what computing resources are appropriate to acquire. This will be fundamental to the capability of science classrooms to engage with cutting-edge issues in science. This session will demonstrate the power of a broad range of learning technologies to enhance science education. The aim is to explore classroom possibilities as well as to provide a basic introduction to technical aspects of various software and hardware applications, including robotics, dataloggers, and simulation software.
Introduction to Multimedia in Instruction. An IAT Technology Primer.
ERIC Educational Resources Information Center
Oblinger, Diana
Multimedia allows computing to move from text and data into the realm of graphics, sound, images, and full-motion video, thus allowing both students and teachers to use the power of computers in new ways. Key elements of multimedia are natural presentation of information and non-linear navigation through applications for access to information on…
The world as viewed by and with unpaired electrons.
Eaton, Sandra S; Eaton, Gareth R
2012-10-01
Recent advances in electron paramagnetic resonance (EPR) include capabilities for applications to areas as diverse as archeology, beer shelf life, biological structure, dosimetry, in vivo imaging, molecular magnets, and quantum computing. Enabling technologies include multifrequency continuous wave, pulsed, and rapid scan EPR. Interpretation is enhanced by increasingly powerful computational models. Copyright © 2012 Elsevier Inc. All rights reserved.
Application of evolutionary computation in ECAD problems
NASA Astrophysics Data System (ADS)
Lee, Dae-Hyun; Hwang, Seung H.
1998-10-01
The design of modern electronic systems is a complicated task which demands the use of computer-aided design (CAD) tools. Since many problems in ECAD are combinatorial optimization problems, evolutionary computations such as genetic algorithms and evolutionary programming have been widely employed to solve them. We have applied evolutionary computation techniques to ECAD problems such as technology mapping, microcode-bit optimization, data path ordering, and peak power estimation, where their benefits are well observed. This paper presents experiences and discusses issues in those applications.
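As a rough illustration of the evolutionary computation techniques mentioned above, here is a minimal genetic algorithm over bitstrings; the toy cost function, population size, and mutation rate are invented for illustration and are not taken from the paper.

```python
import random

def genetic_minimize(cost, n_bits, pop_size=30, generations=100,
                     mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm over fixed-length bitstrings:
    truncation selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                       # lower cost = fitter
        survivors = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

# Toy stand-in for an ECAD objective: Hamming distance to a target pattern
target = [1, 0, 1, 1, 0, 0, 1, 0]
hamming = lambda s: sum(x != t for x, t in zip(s, target))
best = genetic_minimize(hamming, n_bits=len(target))
```

A real objective (technology mapping cost, peak power estimate) would replace the toy Hamming distance, but the selection/crossover/mutation loop is the same shape.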
Radiation Tolerant, FPGA-Based SmallSat Computer System
NASA Technical Reports Server (NTRS)
LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew
2015-01-01
The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.
CUDA GPU based full-Stokes finite difference modelling of glaciers
NASA Astrophysics Data System (ADS)
Brædstrup, C. F.; Egholm, D. L.
2012-04-01
Many have stressed the limitations of using the shallow shelf and shallow ice approximations when modelling ice streams or surging glaciers. Using a full-Stokes approach requires either large amounts of computer power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption, and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a Red-Black Gauss-Seidel iterative linear solver to solve the full Stokes equations. This technique has proven very effective when applied to the Stokes equation in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers have much faster convergence. To aid convergence, the solver uses a multigrid approach where values are interpolated and extrapolated between different grid resolutions to minimize the short wavelength errors efficiently. This reduces the iteration count by several orders of magnitude. The run-time is further reduced by using the GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported 2 - 11 times speedup compared to multicore CPU implementations on similar problems. The goal of these initial investigations into the possible usage of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage conduits for large ice sheets. It is therefore crucial to understand this streaming behavior and its impact up-ice.
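The Red-Black Gauss-Seidel scheme named above can be sketched on a model 2-D Poisson problem (the full-Stokes discretization is considerably more involved); the grid size, source term, and sweep count below are illustrative assumptions. The checkerboard ordering matters because every point of one colour depends only on points of the other colour, so each half-sweep is data-parallel and maps well to a GPU.

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=200):
    """Iterate toward -laplacian(u) = f on a square grid with zero
    Dirichlet boundaries; points are visited in red/black checkerboard
    order, so all points of one colour could be updated in parallel."""
    for _ in range(sweeps):
        for parity in (0, 1):                    # red sweep, then black
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == parity:
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                          u[i, j-1] + u[i, j+1] +
                                          h * h * f[i, j])
    return u

n = 17                       # 17x17 grid on the unit square
h = 1.0 / (n - 1)
u = np.zeros((n, n))         # zero boundary values, zero initial guess
f = np.ones((n, n))          # constant unit source term
u = red_black_gauss_seidel(u, f, h)
# For -laplacian(u) = 1 on the unit square, max(u) is about 0.0737
```

In a multigrid setting, as in the abstract, sweeps of this smoother would be combined with restriction and prolongation between coarser and finer grids to remove long-wavelength error quickly.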
ERIC Educational Resources Information Center
Peng, Hsinyi; Chuang, Po-Ya; Hwang, Gwo-Jen; Chu, Hui-Chun; Wu, Ting-Ting; Huang, Shu-Xian
2009-01-01
Researchers have conducted various studies on applying wireless communication and ubiquitous computing technologies to education, so that the technologies can provide learners and educators with more active and adaptive support. This study proposes a Ubiquitous Performance-support System (UPSS) that can facilitate the seamless use of powerful new…
Ubiquitous Learning (U-Learning) Awareness among the Tuticorin District B.Ed., Trainees
ERIC Educational Resources Information Center
Thiyagu, K.
2011-01-01
The rapid development of information and communication technologies during the past two decades has had many points of contact with education and training. The use of technology in colleges and schools is not new. Teacher training often includes computer-assisted learning along with other multimedia presentation techniques. The power of ICT over…
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
Science & Technology Review, June 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMahon, D
This month's issue has the following articles: (1) Livermore's Three-Pronged Strategy for High-Performance Computing, Commentary by Dona Crawford; (2) Riding the Waves of Supercomputing Technology--Livermore's Computation Directorate is exploiting multiple technologies to ensure high-performance, cost-effective computing; (3) Chromosome 19 and Lawrence Livermore Form a Long-Lasting Bond--Lawrence Livermore biomedical scientists have played an important role in the Human Genome Project through their long-term research on chromosome 19; (4) A New Way to Measure the Mass of Stars--For the first time, scientists have determined the mass of a star in isolation from other celestial bodies; and (5) Flexibly Fueled Storage Tank Brings Hydrogen-Powered Cars Closer to Reality--Livermore's cryogenic hydrogen fuel storage tank for passenger cars of the future can accommodate three forms of hydrogen fuel separately or in combination.
Bång, Magnus; Larsson, Anders; Eriksson, Henrik
2003-01-01
In this paper, we present a new approach to clinical workplace computerization that departs from the window-based user interface paradigm. NOSTOS is an experimental computer-augmented work environment designed to support data capture and teamwork in an emergency room. NOSTOS combines multiple technologies, such as digital pens, walk-up displays, headsets, a smart desk, and sensors, to enhance an existing paper-based practice with computer power. The physical interfaces allow clinicians to retain mobile paper-based collaborative routines and still benefit from computer technology. The requirements for the system were elicited from situated workplace studies. We discuss the advantages and disadvantages of augmenting a paper-based clinical work environment. PMID:14728131
Direct Methanol Fuel Cell Power Supply For All-Day True Wireless Mobile Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian Wells
PolyFuel has developed state-of-the-art portable fuel cell technology for the portable computing market. A novel approach to passive water recycling within the MEA has led to significant system simplification and size reduction. Miniature stack technology with very high area utilization and minimalist seals has been developed. A highly integrated balance of plant with very low parasitic losses has been constructed around the new stack design. Demonstration prototype systems integrated with laptop computers have been shown in recent months to leading OEM computer manufacturers. PolyFuel intends to provide this technology to its customers as a reference design as a means of accelerating the commercialization of portable fuel cell technology. The primary goal of the project was to match the energy density of a commercial lithium ion battery for laptop computers. PolyFuel made large strides toward this goal and has now demonstrated 270 Wh/liter, compared with lithium ion energy densities of 300 Wh/liter. Further, more incremental improvements in energy density are envisioned, with additional 20-30% gains possible in each of the next two years given further research and development.
NASA Astrophysics Data System (ADS)
Chang, S. S. L.
State of the art technology in circuits, fields, and electronics is discussed. The principles and applications of these technologies to industry, digital processing, microwave semiconductors, and computer-aided design are explained. Important concepts and methodologies in mathematics and physics are reviewed, and basic engineering sciences and associated design methods are dealt with, including: circuit theory and the design of magnetic circuits and active filter synthesis; digital signal processing, including FIR and IIR digital filter design; transmission lines, electromagnetic wave propagation and surface acoustic wave devices. Also considered are: electronics technologies, including power electronics, microwave semiconductors, GaAs devices, and magnetic bubble memories; digital circuits and logic design.
Flight control systems development of highly maneuverable aircraft technology /HiMAT/ vehicle
NASA Technical Reports Server (NTRS)
Petersen, K. L.
1979-01-01
The highly maneuverable aircraft technology (HiMAT) program was conceived to demonstrate advanced technology concepts through scaled-aircraft flight tests using a remotely piloted technique. Closed-loop primary flight control is performed from a ground-based cockpit, utilizing a digital computer and up/down telemetry links. A backup flight control system for emergency operation resides in an onboard computer. The onboard systems are designed to provide fail-operational capabilities and utilize two microcomputers, dual uplink receiver/decoders, and redundant hydraulic actuation and power systems. This paper discusses the design and validation of the primary and backup digital flight control systems as well as the unique pilot and specialized systems interfaces.
Evolutionary growth for Space Station Freedom electrical power system
NASA Technical Reports Server (NTRS)
Marshall, Matthew Fisk; Mclallin, Kerry; Zernic, Mike
1989-01-01
Over an operational lifetime of at least 30 years, Space Station Freedom will encounter increased Space Station user requirements and advancing technologies. The Space Station electrical power system is designed with the flexibility to accommodate these emerging technologies and expert systems, and it is being designed with the necessary software hooks and hardware scars to accommodate increased growth demand. The electrical power system is planned to grow from the initial 75 kW up to 300 kW. The Phase 1 station will utilize photovoltaic arrays to produce the electrical power; for growth to 300 kW, however, solar dynamic power modules will be utilized. Pairs of 25 kW solar dynamic power modules will be added to the station to reach the power growth level. The addition of solar dynamic power in the growth phase places constraints on initial Space Station systems such as guidance, navigation, and control, external thermal, truss structural stiffness, and computational capability and storage, which must be planned in from the start to facilitate the addition of the solar dynamic modules.
2001-08-01
Apollo-era technology spurred the development of cordless products that we take for granted every day. In the 1960s, NASA asked Black & Decker to develop a special drill that would be powerful enough to cut through hard layers of the lunar surface yet lightweight and compact, and that would operate under its own power source, allowing Apollo astronauts to collect lunar samples farther away from the Lunar Excursion Module. In response, Black & Decker developed a computer program that analyzed and optimized drill motor operations. From this analysis, engineers were able to design a motor that was powerful yet required minimal battery power to operate. Since those first days of cordless products, Black & Decker has continued to refine this technology, and the company now sells its rechargeable products worldwide (e.g., the Dustbuster, cordless tools for home and industrial use, and medical tools).
Medical imaging and registration in computer assisted surgery.
Simon, D A; Lavallée, S
1998-09-01
Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts in computer assisted surgery applied to orthopaedics, with a focus on the basic framework and underlying technologies, are outlined. In addition, technical challenges and future trends in the field are discussed.
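The registration process described above (aligning the surgical plan to the patient) is, in the rigid paired-point case, commonly solved with a least-squares SVD (Kabsch) fit. The sketch below is a generic illustration under that assumption, not the method of any particular system surveyed in the paper.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch/SVD): find R, t that
    minimize ||R @ source + t - target||^2 for 3xN point sets assumed
    to be in known one-to-one correspondence."""
    sc = source.mean(axis=1, keepdims=True)
    tc = target.mean(axis=1, keepdims=True)
    H = (source - sc) @ (target - tc).T                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    t = tc - R @ sc
    return R, t

# Synthetic check: recover a known rotation about z plus a translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([[1.0], [-2.0], [0.5]])
pts = np.random.default_rng(0).standard_normal((3, 10))
R_est, t_est = rigid_register(pts, R_true @ pts + t_true)
```

In practice the correspondences come from fiducials or surface points, and noise makes the fit approximate rather than exact; the reflection guard keeps the solution a proper rotation.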
1996 Laboratory directed research and development annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyers, C.E.; Harvey, C.L.; Lopez-Andreas, L.M.
This report summarizes progress from the Laboratory Directed Research and Development (LDRD) program during fiscal year 1996. In addition to a programmatic and financial overview, the report includes progress reports from 259 individual R&D projects in seventeen categories. The general areas of research include: engineered processes and materials; computational and information sciences; microelectronics and photonics; engineering sciences; pulsed power; advanced manufacturing technologies; biomedical engineering; energy and environmental science and technology; advanced information technologies; counterproliferation; advanced transportation; national security technology; electronics technologies; idea exploration and exploitation; production; and science at the interfaces - engineering with atoms.
Eastern Wind Data Set | Grid Modernization | NREL
Wind power for each grid cell was computed by combining these data sets with a composite turbine power curve and the wind speed at the site. Adjustments were made for model biases, wake losses, and wind gusts, and the turbine power conversion was updated to better reflect future wind turbine technology. The 12-hour discontinuity was…
ERIC Educational Resources Information Center
FitzPatrick, Sarah B.; Faux, Russell
During the last decade, U.S. K-12 schools have approximately tripled their spending on increasingly powerful computers, expanded network access, and novel computer applications. The number of questions being asked by educators, policymakers, and the general public about the extent to which these technologies are being used in classrooms, for what…
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today’s applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware based, and in particular FPGA, solutions might appear as a candidate technology, since, although power use is higher than in lower-power devices, execution time is reduced, so overall energy could be lower. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high performance, high capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency than processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
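The energy argument above, that a higher-power device can still use less energy if it finishes much faster, reduces to E = P × t. The numbers below are illustrative assumptions, not measurements from the paper.

```python
def energy_uj(power_mw, time_ms):
    """Task energy in microjoules: E = P * t (mW * ms = uJ)."""
    return power_mw * time_ms

# Hypothetical encryption task: low-power MCU vs. FPGA accelerator
mcu_uj  = energy_uj(power_mw=30.0,  time_ms=500.0)   # slow but frugal
fpga_uj = energy_uj(power_mw=400.0, time_ms=5.0)     # power-hungry but fast
ratio = mcu_uj / fpga_uj
```

Here the FPGA draws roughly 13x more power but finishes 100x faster, so it spends 7.5x less energy on the task; this is the break-even logic that makes hardware acceleration attractive for duty-cycled sensor nodes.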
NASA Technical Reports Server (NTRS)
Litchford, Ron; Robertson, Tony; Hawk, Clark; Turner, Matt; Koelfgen, Syri
1999-01-01
This presentation discusses the use of magnetic flux compression in spaceflight for propulsion and other power applications. The qualities that make this technology suitable for spaceflight propulsion and power are its high power density; its ability to deliver multimegawatt energy bursts and terawatt power bursts; its ability to produce the pulse power needed by low-impedance dense plasma devices (e.g., pulse fusion drivers); and its ability to produce direct thrust. The issues of a metal versus a plasma armature are discussed; the requirements for high energy output and fast pulse rise time call for a high-speed armature. The plasma armature enables repetitive firing capabilities. The issues concerning the high-temperature superconductor stator are also discussed. The concept of the radial-mode pulse power generator is described. The proposed research strategy combines computational modeling (i.e., magnetohydrodynamic computations and finite element modeling) with laboratory experiments to create a demonstration device.
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
Efficiency of high-performance computing is in high demand, for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is a technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and has three sets of basic gates. Series of reciprocal transmission lines are placed between the gates to avoid loss of power and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of RQL is area: distributing the power supply properly requires splitters, which occupy a large area. Distributed arithmetic performs vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the bits are rearranged and mixed to form a distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
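The distributed-arithmetic idea described above can be sketched in a few lines: the constant vector is folded into a lookup table indexed by bit-slices of the variable vector, so the inner product needs only shifts and adds, no multipliers. This is a generic unsigned-input sketch, not the paper's RQL implementation; `da_inner_product` is a hypothetical helper name.

```python
# Distributed-arithmetic (DA) inner product sketch: y = sum(A[k] * x[k]).
# A is the constant vector; a lookup table over bit-slices of x replaces
# the multipliers. Unsigned inputs for simplicity; values are illustrative.

from itertools import product

def da_inner_product(A, xs, bits=8):
    # Precompute table: for every combination of one bit from each x,
    # store the partial sum of the corresponding constants.
    table = {combo: sum(a for a, b in zip(A, combo) if b)
             for combo in product((0, 1), repeat=len(A))}
    y = 0
    for b in range(bits):                     # process one bit-plane per step
        slice_bits = tuple((x >> b) & 1 for x in xs)
        y += table[slice_bits] << b           # shift-accumulate, no multiplies
    return y

A, xs = [3, 5, 7], [10, 20, 30]
assert da_inner_product(A, xs) == sum(a * x for a, x in zip(A, xs))  # 340
```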
NASA Lewis Stirling SPRE testing and analysis with reduced number of cooler tubes
NASA Technical Reports Server (NTRS)
Wong, Wayne A.; Cairelli, James E.; Swec, Diane M.; Doeberling, Thomas J.; Lakatos, Thomas F.; Madi, Frank J.
1992-01-01
Free-piston Stirling power converters are candidates for high capacity space power applications. The Space Power Research Engine (SPRE), a free-piston Stirling engine coupled with a linear alternator, is being tested at the NASA Lewis Research Center in support of the Civil Space Technology Initiative. The SPRE is used as a test bed for evaluating converter modifications which have the potential to improve the converter performance and for validating computer code predictions. Reducing the number of cooler tubes on the SPRE has been identified as a modification with the potential to significantly improve power and efficiency. Experimental tests designed to investigate the effects of reducing the number of cooler tubes on converter power, efficiency and dynamics are described. Presented are test results from the converter operating with a reduced number of cooler tubes and comparisons between this data and both baseline test data and computer code predictions.
Thermal and Power Challenges in High Performance Computing Systems
NASA Astrophysics Data System (ADS)
Natarajan, Venkat; Deshpande, Anand; Solanki, Sudarshan; Chandrasekhar, Arun
2009-05-01
This paper provides an overview of the thermal and power challenges in emerging high performance computing platforms. The advent of new sophisticated applications in highly diverse areas such as health, education, finance, entertainment, etc. is driving the platform and device requirements for future systems. The key ingredients of future platforms are vertically integrated (3D) die-stacked devices which provide the required performance characteristics with the associated form factor advantages. Two of the major challenges to the design of through silicon via (TSV) based 3D stacked technologies are (i) effective thermal management and (ii) efficient power delivery mechanisms. Some of the key challenges that are articulated in this paper include hot-spot superposition and intensification in a 3D stack, design/optimization of thermal through silicon vias (TTSVs), non-uniform power loading of multi-die stacks, efficient on-chip power delivery, minimization of electrical hotspots etc.
The NASA computer aided design and test system
NASA Technical Reports Server (NTRS)
Gould, J. M.; Juergensen, K.
1973-01-01
A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described stressing the interaction of programs rather than detail of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.
Progress in a novel architecture for high performance processing
NASA Astrophysics Data System (ADS)
Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin
2018-04-01
The high performance processing (HPP) is an innovative architecture which targets high performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications like supercomputing, machine learning, and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FinFET (FFC) technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
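The two quoted figures imply a peak power draw, which is a quick sanity check: dividing peak throughput by energy efficiency gives watts.

```python
# Back-of-envelope check of the quoted figures: peak throughput divided by
# energy efficiency gives the implied power draw at peak.
peak_flops = 4.3e12        # 4.3 TFLOPS
efficiency = 89.5e9        # 89.5 GFLOPS/W
implied_power_w = peak_flops / efficiency
print(f"{implied_power_w:.1f} W")  # prints "48.0 W"
```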
NASA Astrophysics Data System (ADS)
Li, Haiqing; Chatterjee, Samir
With rapid advances in information and communication technology, computer-mediated communication (CMC) technologies utilize multiple IT platforms such as email, websites, cell phones/PDAs, social networking sites, and gaming environments. However, no studies have compared the effectiveness of a persuasive system across such alternative channels and various persuasive techniques. Moreover, how affective computing impacts the effectiveness of persuasive systems is not clear. This study proposes that (1) persuasive technology channels, in combination with persuasive strategies, will differ in persuasive effectiveness; and (2) adding positive emotion to a message, leading to a better overall user experience, could increase persuasive effectiveness. Affective computing, or emotion information, was added to the experiment using emoticons. The initial results of a pilot study show that computer-mediated communication channels, along with various persuasive strategies, can affect persuasive effectiveness to varying degrees. These results also show that adding a positive emoticon to a message leads to a better user experience, which increases the overall persuasive effectiveness of a system.
ERIC Educational Resources Information Center
Dunst, Carl J.; Trivette, Carol M.; Hamby, Deborah W.; Simkus, Andrew
2013-01-01
Findings from a meta-analysis of studies investigating the use of five different assistive technology devices (switch interfaces, powered mobility, computers, augmentative communication, weighted/pressure vests) with young children with disabilities are reported. One hundred and nine studies including 1,342 infants, toddlers, and preschoolers were…
ERIC Educational Resources Information Center
Su, King-Dow
2008-01-01
The purpose of this study was to evaluate the instructional effects of using animations, static figures, PowerPoint bulletins, and e-plus software as chemistry texts with the aid of computer-based technology. This study analyzed the characteristics of students involved in three multimedia courses and their achievement and attitude toward chemistry…
The Online Learning Imperative: A Solution to Three Looming Crises in Education. Issue Brief
ERIC Educational Resources Information Center
Wise, Bob
2010-01-01
The current process and infrastructure for educating students in this country cannot sustain itself any longer. In just about every other facet of society, at work and at home, technology has transformed the way Americans go about their lives. Yet schools have been slow to embrace the transformative power of technology. Although computers are…
Using Technology to Optimize and Generalize: The Least-Squares Line
ERIC Educational Resources Information Center
Burke, Maurice J.; Hodgson, Ted R.
2007-01-01
With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…
ERIC Educational Resources Information Center
Abramovich, S.
2014-01-01
The availability of sophisticated computer programs such as "Wolfram Alpha" has made many problems found in the secondary mathematics curriculum somewhat obsolete for they can be easily solved by the software. Against this background, an interplay between the power of a modern tool of technology and educational constraints it presents is…
Adolescent's Perceptions of Deviance When Using Technology: The Approaching Post-Typographic Culture
ERIC Educational Resources Information Center
Daniel, Annie J.
2005-01-01
Background: Deviant behavior on the computer and the Internet is rising as technology use increases (Hollinger, 1996b; Power, 2000; Vatis, 2000). Recently, there has been an increase in the number of high-tech crimes by adolescents. According to the U.S. Department of Justice (2005), teens have gone high tech with old-fashioned bullying, stalking…
A computer-based specification methodology
NASA Technical Reports Server (NTRS)
Munck, Robert G.
1986-01-01
Standard practices for creating and using system specifications are inadequate for large, advanced-technology systems. A need exists to break away from paper documents in favor of documents that are stored in computers and which are read and otherwise used with the help of computers. An SADT-based system, running on the proposed Space Station data management network, could be a powerful tool for doing much of the required technical work of the Station, including creating and operating the network itself.
Surface texture and hardness of dental alloys processed by alternative technologies
NASA Astrophysics Data System (ADS)
Porojan, Liliana; Savencu, Cristina E.; Topală, Florin I.; Porojan, Sorin D.
2017-08-01
Technological developments have led to the implementation of novel digitalized manufacturing methods for the production of metallic structures in prosthetic dentistry. These technologies can be classified as based on subtractive manufacturing, assisted by computer-aided design/computer-aided manufacturing (CAD/CAM) systems, or on additive manufacturing (AM), such as the recently developed laser-based methods. The aim of the study was to assess the surface texture and hardness of metallic structures for dental restorations obtained by alternative technologies: conventional casting (CST), computerized milling (MIL), and AM powder bed fusion methods, respectively selective laser melting (SLM) and selective laser sintering (SLS). For the experimental analyses, metallic specimens made of Co-Cr dental alloys were prepared as indicated by the manufacturers. The specimen structure at the macro level was observed by optical microscope, and micro-hardness was measured in all substrates. Metallic frameworks obtained by AM are characterized by increased hardness, depending also on the surface processing. The formation of microstructural defects can be better controlled and avoided during the SLM and MIL processes. Application of powder bed fusion techniques, like SLS and SLM, is currently a challenge in dental alloy processing.
Overview of NASA Lewis Research Center free-piston Stirling engine activities
NASA Technical Reports Server (NTRS)
Slaby, J. G.
1984-01-01
A generic free-piston Stirling technology project is being conducted to develop technologies generic to both space power and terrestrial heat pump applications in a cooperative, cost-shared effort. The generic technology effort includes extensive parametric testing of a 1 kW free-piston Stirling engine (RE-1000), development of a free-piston Stirling performance computer code, design and fabrication under contract of a hydraulic output modification for RE-1000 engine tests, and a 1000-hour endurance test, under contract, of a 3 kWe free-piston Stirling/alternator engine. A newly initiated space power technology feasibility demonstration effort addresses the capability of scaling a free-piston Stirling/alternator system to about 25 kWe; developing thermodynamic cycle efficiency equal to 70 percent of Carnot at temperature ratios on the order of 1.5 to 2.0; achieving a power conversion unit specific weight of 6 kg/kWe; operating with noncontacting gas bearings; and dynamically balancing the system. Planned engine and component design and test efforts are described.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1993-01-01
The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters' terminal used to communicate with the ACTS for various experiments by government, university, and industry agencies. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included are an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and the computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.
Experimental Results from the Thermal Energy Storage-1 (TES-1) Flight Experiment
NASA Technical Reports Server (NTRS)
Wald, Lawrence W.; Tolbert, Carol; Jacqmin, David
1995-01-01
The Thermal Energy Storage-1 (TES-1) is a flight experiment that flew on the Space Shuttle Columbia (STS-62), in March 1994, as part of the OAST-2 mission. TES-1 is the first experiment in a four-experiment suite designed to provide data for understanding the long duration microgravity behavior of thermal energy storage fluoride salts that undergo repeated melting and freezing. Such data have never been obtained before and have direct application for the development of space-based solar dynamic (SD) power systems. These power systems will store solar energy in a thermal energy storage salt such as lithium fluoride or calcium fluoride. The stored energy is extracted during the shade portion of the orbit. This enables the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes have been developed for predicting the performance of a space-based solar dynamic power system. Experimental verification of the analytical predictions is needed prior to using the analytical results for future space power design applications. The four TES flight experiments will be used to obtain the needed experimental data. This paper will focus on the flight results from the first experiment, TES-1, in comparison to the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code. The TES-1 conceptual development, hardware design, final development, and system verification testing were accomplished at the NASA Lewis Research Center (LeRC). TES-1 was developed under the In-Space Technology Experiment Program (IN-STEP), which sponsors NASA, industry, and university flight experiments designed to enable and enhance space flight technology. The IN-STEP Program is sponsored by the Office of Space Access and Technology (OSAT).
QUANTUM: The Exhibition - quantum at the museum
NASA Astrophysics Data System (ADS)
Laforest, Martin; Olano, Angela; Day-Hamilton, Tobi
Distilling the essence of quantum phenomena, and how they are being harnessed to develop powerful quantum technologies, into a series of bite-sized, elementary-school-level pieces is what the scientific outreach team at the University of Waterloo's Institute for Quantum Computing was tasked with. QUANTUM: The Exhibition uses a series of informational panels, multimedia and interactive displays to introduce visitors to quantum phenomena and how they will revolutionize computing, information security and sensing. We'll discuss some of the approaches we took to convey the essence and impact of quantum mechanics and technologies to a lay audience while ensuring scientific accuracy.
NASA Astrophysics Data System (ADS)
Ness, P. H.; Jacobson, H.
1984-10-01
The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly-line flow cost efficiencies for small batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented by interactive CAD/CAM systems. This scheme exploits significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.
The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2016 update
Afgan, Enis; Baker, Dannon; van den Beek, Marius; Blankenberg, Daniel; Bouvier, Dave; Čech, Martin; Chilton, John; Clements, Dave; Coraor, Nate; Eberhard, Carl; Grüning, Björn; Guerler, Aysam; Hillman-Jackson, Jennifer; Von Kuster, Greg; Rasche, Eric; Soranzo, Nicola; Turaga, Nitesh; Taylor, James; Nekrutenko, Anton; Goecks, Jeremy
2016-01-01
High-throughput data production technologies, particularly 'next-generation' DNA sequencing, have ushered in widespread and disruptive changes to biomedical research. Making sense of the large datasets produced by these technologies requires sophisticated statistical and computational methods, as well as substantial computational power. This has led to an acute crisis in the life sciences, as researchers without informatics training attempt to perform computation-dependent analyses. Since 2005, the Galaxy project has worked to address this problem by providing a framework that makes advanced computational tools usable by non-experts. Galaxy seeks to make data-intensive research more accessible, transparent and reproducible by providing a Web-based environment in which users can perform computational analyses and have all of the details automatically tracked for later inspection, publication, or reuse. In this report we highlight recently added features enabling biomedical analyses on a large scale. PMID:27137889
A Systems Model for Power Technology Assessment
NASA Technical Reports Server (NTRS)
Hoffman, David J.
2002-01-01
A computer model is under continuing development at NASA Glenn Research Center that enables first-order assessments of space power technology. The model, an evolution of NASA Glenn's Array Design Assessment Model (ADAM), is an Excel workbook that consists of numerous spreadsheets containing power technology performance data and sizing algorithms. Underlying the model are a number of databases that contain default values for various power generation, energy storage, and power management and distribution component parameters. These databases are actively maintained by a team of systems analysts so that they contain state-of-the-art data as well as the most recent technology performance projections. Sizing of the power subsystems can be accomplished either by using an assumed mass specific power (W/kg) or energy (Wh/kg) or by a bottom-up calculation that accounts for individual component performance and masses. The power generation, energy storage, and power management and distribution subsystems are sized for given mission requirements for a baseline case and up to three alternatives. This allows four different power systems to be sized and compared using consistent assumptions and sizing algorithms. The component sizing models contained in the workbook are modular so that they can be easily maintained and updated. All significant input values have default values loaded from the databases that can be over-written by the user. The default data and sizing algorithms for each of the power subsystems are described in some detail. The user interface and workbook navigational features are also discussed. Finally, an example study case that illustrates the model's capability is presented.
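The specific-power sizing approach described above can be sketched as follows; the function names and all default values here are illustrative assumptions, not data from the ADAM databases.

```python
# First-order power subsystem sizing: mass from an assumed specific power
# (W/kg) or specific energy (Wh/kg). Values below are illustrative only.

def size_power_generation(required_w, specific_power_w_per_kg):
    return required_w / specific_power_w_per_kg          # mass in kg

def size_energy_storage(required_wh, specific_energy_wh_per_kg):
    return required_wh / specific_energy_wh_per_kg       # mass in kg

array_kg   = size_power_generation(5000.0, 80.0)    # 5 kW array at 80 W/kg
battery_kg = size_energy_storage(3000.0, 120.0)     # 3 kWh storage at 120 Wh/kg
print(array_kg, battery_kg)   # prints "62.5 25.0"
```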
Ocean power technology design optimization
van Rij, Jennifer; Yu, Yi -Hsiang; Edwards, Kathleen; ...
2017-07-18
For this study, the National Renewable Energy Laboratory and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. Finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. The design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.
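The ratio-based design comparison described above can be sketched in a few lines; the design numbers below are illustrative placeholders, not OPT PowerBuoy data.

```python
# Rank hypothetical WEC design variants by power-to-weight and
# power-to-fatigue-load ratios. All figures are illustrative placeholders.

designs = {
    "A": {"power_kw": 110.0, "weight_t": 160.0, "fatigue_load_kn": 900.0},
    "B": {"power_kw": 125.0, "weight_t": 180.0, "fatigue_load_kn": 850.0},
}

def ratios(d):
    return {"power_to_weight": d["power_kw"] / d["weight_t"],
            "power_to_fatigue": d["power_kw"] / d["fatigue_load_kn"]}

for name, d in designs.items():
    print(name, ratios(d))
```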
Exide eyeing technology for high-powered battery
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1999-11-01
Exide Corp. said recently it may soon produce a graphite battery with more than three times the power of today's most advanced production batteries--but with half their weight, far smaller size, and only a third the cost. The Reading-based Exide, the world's largest maker of lead-acid batteries, said it has preliminarily agreed to pay $20 million for a controlling interest in Lion Compact Energy, a privately held company that is researching dual-graphite battery technology said to be cleaner, cheaper, and more efficient. Exide hopes to turn the technology into products; it said initial applications include smaller battery-operated devices such as cell phones, cameras, laptop computers, power tools, and certain military equipment. Larger devices would follow and could include wheelchairs, motorcycles, replacements for lead-acid batteries in cars and trucks and, potentially, all-electric vehicles.
Using speech recognition to enhance the Tongue Drive System functionality in computer access.
Huo, Xueliang; Ghovanloo, Maysam
2011-01-01
Tongue Drive System (TDS) is a wireless tongue-operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with commercially available speech recognition software, Dragon NaturallySpeaking, which is regarded as one of the most efficient means of text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing.
Grids, Clouds, and Virtualization
NASA Astrophysics Data System (ADS)
Cafaro, Massimo; Aloisio, Giovanni
This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed, especially in the business world, that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to eventually render this as rare as organizations that generate their own electricity today, even among institutions that currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.
Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites
NASA Astrophysics Data System (ADS)
2002-03-01
Many satellites are an integrated collection of sensors and actuators that require dedicated real-time control. For single processor systems, additional sensors require an increase in computing power and speed to provide the multi-tasking capability needed to service each sensor. Faster processors cost more and consume more power, which taxes a satellite's power resources and may lead to shorter satellite lifetimes. An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator. The design of microdot, a four-bit microcontroller for distributed low-end computing, is presented. The design is based on previous research completed at the Space Electronics Branch, Air Force Research Laboratory (AFRL/VSSE) at Kirtland AFB, NM, and the Air Force Institute of Technology at Wright-Patterson AFB, OH. The Microdot has 29 instructions and a 1K x 4 instruction memory. The distributed computing architecture is based on the Philips Semiconductor I2C Serial Bus Protocol. A prototype was implemented and tested using an Altera Field Programmable Gate Array (FPGA). The prototype was operable to 9.1 MHz. The design was targeted for fabrication in a radiation-hardened-by-design gate-array cell library for the TSMC 0.35 micrometer CMOS process.
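The distributed-control idea above (one tiny controller per sensor on a shared bus, rather than one fast multitasking CPU) can be sketched as follows; the `Bus` and `SensorNode` classes are hypothetical stand-ins for an I2C-style addressed bus, not the Microdot design itself.

```python
# Sketch of distributed low-end computing: each sensor gets its own small
# controller, addressed over a shared bus, so no single fast CPU is needed.
# Classes, addresses, and readings are illustrative assumptions.

class SensorNode:
    def __init__(self, address, read_fn):
        self.address = address
        self.read_fn = read_fn      # node-local processing stays at the node

class Bus:
    def __init__(self, nodes):
        self.nodes = {n.address: n for n in nodes}
    def request(self, address):
        # Master polls one node by address, as in an I2C-style transaction.
        return self.nodes[address].read_fn()

bus = Bus([SensorNode(0x10, lambda: 21.5),     # temperature node
           SensorNode(0x11, lambda: 3.3)])     # bus-voltage node
print(bus.request(0x10))   # prints "21.5"
```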
Integration of nanoscale memristor synapses in neuromorphic computing architectures
NASA Astrophysics Data System (ADS)
Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis
2013-09-01
Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the capabilities of prevailing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
Open Source Radiation Hardened by Design Technology
NASA Technical Reports Server (NTRS)
Shuler, Robert
2016-01-01
The proposed technology allows use of the latest microcircuit technology with lowest power and fastest speed, with minimal delay and engineering costs, through new Radiation Hardened by Design (RHBD) techniques that do not require extensive process characterization, technique evaluation and re-design at each Moore's Law generation. The separation of critical node groups is explicitly parameterized so it can be increased as microcircuit technologies shrink. The technology will be open access to radiation tolerant circuit vendors. INNOVATION: This technology would enhance computation intensive applications such as autonomy, robotics, advanced sensor and tracking processes, as well as low power applications such as wireless sensor networks. OUTCOME / RESULTS: 1) Simulation analysis indicates feasibility. 2) Compact voting latch 65 nm test chip designed and submitted for fabrication (July 2016). INFUSION FOR SPACE / EARTH: This technology may be used in any digital integrated circuit in which a high level of resistance to Single Event Upsets is desired, and has the greatest benefit outside low earth orbit where cosmic rays are numerous.
Research on Key Technologies of Cloud Computing
NASA Astrophysics Data System (ADS)
Zhang, Shufen; Yan, Hongcan; Chen, Xuebin
With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks across a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space, and software services according to demand. It concentrates all the computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which favors innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services, and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas, and telephone services. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS, and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization, and the programming model.
Overview of space power electronics technology under the CSTI High Capacity Power Program
NASA Technical Reports Server (NTRS)
Schwarze, Gene E.
1994-01-01
The Civilian Space Technology Initiative (CSTI) is a NASA program targeted at the development of specific technologies in the areas of transportation, operations, and science. Each of these three areas consists of major elements, and one of the operations elements is the High Capacity Power element. The goal of this element is to develop the technology base needed to meet the long-duration, high-capacity power requirements for future NASA initiatives. The High Capacity Power element is broken down into several subelements that include energy conversion in the areas of the free-piston Stirling power converter and thermoelectrics, thermal management, power management, system diagnostics, and environmental compatibility and system lifetime. A recent overview of the CSTI High Capacity Power element and a description of each of the program's subelements is given by Winter (1989). The goals of the Power Management subelement are twofold. The first is to develop, test, and demonstrate high-temperature, radiation-resistant power and control components and circuits that will be needed in the Power Conditioning, Control and Transmission (PCCT) subsystem of a space nuclear power system. The results obtained under this goal will also be applicable to the instrumentation and control subsystem of a space nuclear reactor. These components and circuits must perform reliably for lifetimes of 7-10 years. The second goal is to develop analytical models for use in computer simulations of candidate PCCT subsystems. Circuits which will be required for a specific PCCT subsystem will be designed and built to demonstrate their performance and also to validate the analytical models and simulations. The tasks under the Power Management subelement are described in terms of objectives, approach, and present status of work.
Self-learning computers for surgical planning and prediction of postoperative alignment.
Lafage, Renaud; Pesenti, Sébastien; Lafage, Virginie; Schwab, Frank J
2018-02-01
In past decades, the role of sagittal alignment has been widely demonstrated in the setting of spinal conditions. As several parameters can be affected, identifying the driver of the deformity is the cornerstone of a successful treatment approach. Despite the importance of restoring sagittal alignment for optimizing outcome, this task remains challenging. Self-learning computers and optimized algorithms are of great interest in spine surgery in that they facilitate better planning and prediction of postoperative alignment. Nowadays, computer-assisted tools are part of surgeons' daily practice; however, the use of such tools remains time-consuming. NARRATIVE REVIEW AND RESULTS: Computer-assisted methods for the prediction of postoperative alignment consist of a three-step analysis: identification of anatomical landmarks, definition of alignment objectives, and simulation of surgery. Recently, complex rules for the prediction of alignment have been proposed. Even though this kind of work leads to more personalized objectives, the number of parameters involved renders it difficult for clinical use, stressing the importance of developing computer-assisted tools. The evolution of our current technology, including machine learning and other types of advanced algorithms, will provide powerful tools that could be useful in improving surgical outcomes and alignment prediction. These tools can combine different types of advanced technologies, such as image recognition and shape modeling, and using this technique, computer-assisted methods are able to predict spinal shape. The development of powerful computer-assisted methods involves the integration of several sources of information such as radiographic parameters (X-rays, MRI, CT scan, etc.), demographic information, and unusual non-osseous parameters (muscle quality, proprioception, gait analysis data).
By using a larger set of data, these methods will aim to mimic what is actually done by spine surgeons, leading to real tailor-made solutions. Integrating newer technology can change the current way of planning/simulating surgery. The use of powerful computer-assisted tools that are able to integrate several parameters and learn from experience can change the traditional way of selecting treatment pathways and counseling patients. However, there is still much work to be done to reach the level already achieved in other orthopedic fields, such as hip surgery. Many of these tools already exist in non-medical fields, and their adaptation to spine surgery is of considerable interest.
NASA Technical Reports Server (NTRS)
Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.
1993-01-01
A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors, and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PCs. This enhancement allows PCs to operate at speeds approaching those of supercomputers, making atomic-scale simulations possible on low-cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin film growth. A future stage of development will incorporate atomic-scale modeling.
NASA Technical Reports Server (NTRS)
1988-01-01
The research activities of the Lewis Research Center for 1988 are summarized. The projects included are within basic and applied technical disciplines essential to aeropropulsion, space propulsion, space power, and space science/applications. These disciplines are materials science and technology, structural mechanics, life prediction, internal computational fluid mechanics, heat transfer, instruments and controls, and space electronics.
Integrating a Single Tablet PC in Chemistry, Engineering, and Physics Courses
ERIC Educational Resources Information Center
Rogers, James W.; Cox, James R.
2008-01-01
A tablet PC is a versatile computer that combines the computing power of a notebook with the pen functionality of a PDA (Cox and Rogers 2005b). The authors adopted tablet PC technology in order to improve the process and product of the lecture format in their chemistry, engineering, and physics courses. In this high-tech model, a single tablet PC…
Adaptation of XMM-Newton SAS to GRID and VO architectures via web
NASA Astrophysics Data System (ADS)
Ibarra, A.; de La Calle, I.; Gabriel, C.; Salgado, J.; Osuna, P.
2008-10-01
The XMM-Newton Scientific Analysis Software (SAS) is robust software that has allowed users to produce good scientific results since the beginning of the mission. This has been possible given the SAS capability to evolve with the advent of new technologies and adapt to the needs of the scientific community. The prototype of the Remote Interface for Science Analysis (RISA) presented here is one such example, which provides remote analysis of XMM-Newton data with access to all the existing SAS functionality, while making use of GRID computing technology. This new technology has recently emerged within the astrophysical community to tackle the long-standing problem of computing power for the reduction of large amounts of data.
Childhood Forearm Breaks Resulting from Mild Trauma May Indicate Bone Deficits
... a powerful new technology called high-resolution peripheral quantitative computed tomography (HRpQCT), which, unlike DXA, can assess ... persist throughout life. The investigators concluded that additional research is needed to determine if childhood bone weakness ...
Mobile computing device configured to compute irradiance, glint, and glare of the sun
Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib
2014-03-11
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
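The intensity-to-irradiance mapping the abstract describes can be sketched as follows. This is an illustrative simplification, not the patented algorithm: the function name, the uniform toy images, and the single-scalar calibration against a known direct normal irradiance (DNI) are all assumptions.

```python
import numpy as np

def irradiance_map(receiver_img, sun_img, dni=1000.0):
    """Hypothetical sketch: scale the receiver image's pixel intensities
    into W/m^2 using the mean intensity of the Sun's image as the
    calibration reference for a known DNI."""
    sun_mean = sun_img.mean()  # reference intensity corresponding to the DNI
    if sun_mean == 0:
        raise ValueError("sun image is black; cannot calibrate")
    return receiver_img.astype(float) * (dni / sun_mean)

# toy example: a receiver image twice as bright as the Sun reference
sun = np.full((4, 4), 100.0)
recv = np.full((8, 8), 200.0)
m = irradiance_map(recv, sun, dni=1000.0)  # every pixel maps to 2000.0 W/m^2
```

A real implementation would also need per-pixel solid-angle and lens-response corrections; this sketch only captures the calibration-by-sun-image idea.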
2003-04-07
institutions of the Armed Forces and foreign corporations. Within this framework, the Tactical Computer Training System9 (Sistema de Entrenamiento...Chile, where rocket propulsion technology is not well developed because the Armed forces get it from foreign companies. The idea is to be able to...Military Affairs,” Joint Force Quarterly 31 (Summer 2002): 55. 6 Gobierno de Chile, Ministerio de Defensa Nacional, Libro de la Defensa de Chile (Santiago
Present Status of Power Circuit Breaker and its Future
NASA Astrophysics Data System (ADS)
Yoshioka, Yoshio
Gas circuit breakers and vacuum circuit breakers are the two main types of circuit breaker used in extra-high-voltage and medium-voltage networks. After reviewing the history of these circuit breakers, their present status and technologies are described. As for future technology, computation of interrupting phenomena, SF6-gas-free apparatus, and expectations for the high-voltage vacuum circuit breaker are discussed.
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short-takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
Capturing and analyzing wheelchair maneuvering patterns with mobile cloud computing.
Fu, Jicheng; Hao, Wei; White, Travis; Yan, Yuqing; Jones, Maria; Jan, Yih-Kuen
2013-01-01
Power wheelchairs have been widely used to provide independent mobility to people with disabilities. Despite great advancements in power wheelchair technology, research shows that wheelchair related accidents occur frequently. To ensure safe maneuverability, capturing wheelchair maneuvering patterns is fundamental to enable other research, such as safe robotic assistance for wheelchair users. In this study, we propose to record, store, and analyze wheelchair maneuvering data by means of mobile cloud computing. Specifically, the accelerometer and gyroscope sensors in smart phones are used to record wheelchair maneuvering data in real-time. Then, the recorded data are periodically transmitted to the cloud for storage and analysis. The analyzed results are then made available to various types of users, such as mobile phone users, traditional desktop users, etc. The combination of mobile computing and cloud computing leverages the advantages of both techniques and extends the smart phone's capabilities of computing and data storage via the Internet. We performed a case study to implement the mobile cloud computing framework using Android smart phones and Google App Engine, a popular cloud computing platform. Experimental results demonstrated the feasibility of the proposed mobile cloud computing framework.
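The record-store-analyze loop described above can be sketched as a simple batching pattern. This is an illustration only: the class and method names are hypothetical, and the actual system ran on Android smart phones with Google App Engine as the cloud back end.

```python
import json
import time
from collections import deque

class ManeuverRecorder:
    """Hypothetical sketch of the record-then-upload pattern: buffer
    accelerometer/gyroscope samples locally, then flush them to the
    cloud as a batch at a fixed period."""

    def __init__(self, period_s=60):
        self.buffer = deque()
        self.period_s = period_s
        self.last_flush = time.time()

    def on_sample(self, ax, ay, az, gx, gy, gz):
        # one timestamped accelerometer + gyroscope reading
        self.buffer.append({"t": time.time(),
                            "acc": [ax, ay, az],
                            "gyro": [gx, gy, gz]})
        if time.time() - self.last_flush >= self.period_s:
            self.flush()

    def flush(self):
        # serialize the batch; the real system would POST this to the cloud
        batch = json.dumps(list(self.buffer))
        self.buffer.clear()
        self.last_flush = time.time()
        return batch
```

Batching amortizes the radio and network cost of each upload, which matters for the phone's battery, one of the advantages of the mobile-plus-cloud split the paper leverages.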
The research and application of the power big data
NASA Astrophysics Data System (ADS)
Zhang, Suxiang; Zhang, Dong; Zhang, Yaping; Cao, Jinping; Xu, Huiming
2017-01-01
Facing the growing environmental crisis, improving energy efficiency is an important problem, and power big data is a main supporting tool for realizing demand-side management and response. With the promotion of smart power consumption, distributed clean energy, electric vehicles, and similar technologies are gaining wide application; meanwhile, with the continuous development of Internet of Things technology, more and more applications access terminals in the power grid, so that large numbers of electric terminal devices and new energy sources connect to the smart grid and produce massive heterogeneous, multi-state electricity data. These data are the power grid enterprise's precious wealth: power big data. Transforming them into valuable knowledge and effective operation is an important problem, and it requires interoperation across the smart grid. In this paper, we study various applications of power big data and integrate cloud computing and big data technology, including online monitoring of electricity consumption, short-term power load forecasting, and energy-efficiency analysis. Based on Hadoop, HBase, Hive, and related tools, we realize the ETL and OLAP functions; we adopt a parallel computing framework for the power load forecasting algorithms and propose a parallel locally weighted linear regression model; and we study an energy-efficiency rating model to comprehensively evaluate the energy consumption of electricity users, which allows users to understand their real-time energy consumption and adjust their electricity behavior to reduce it, providing a basis for decision making. Taking an intelligent industrial park as an example, the paper demonstrates electricity management. In the future, power big data will provide decision-support tools for energy conservation and emissions reduction.
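Locally weighted linear regression, the model class the paper parallelizes for load forecasting, can be sketched for a single query point. This is an illustrative serial version under standard textbook assumptions; it does not reproduce the paper's parallel Hadoop implementation.

```python
import numpy as np

def lwlr_predict(x_query, X, y, tau=1.0):
    """Locally weighted linear regression at one query point:
    weight each training sample by a Gaussian kernel of its distance
    to the query, then solve the weighted least-squares problem
    (X^T W X) theta = X^T W y for the local line."""
    Xb = np.column_stack([np.ones(len(X)), X])        # add intercept column
    q = np.array([1.0, x_query])
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))  # Gaussian weights
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return q @ theta

# toy load curve: on exactly linear data the local fit recovers the line
X = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * X + 1.0
print(lwlr_predict(1.5, X, y))  # ~4.0
```

Because each query point's fit is independent, the per-point solves parallelize naturally, which is presumably what makes the model a good fit for a MapReduce-style framework.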
1999-12-01
Ajzen, I. and M. Fishbein. Understanding Attitudes and Predicting Social Behavior. Prentice-Hall, Englewood Cliffs, NJ: 1980. Alwang, Greg. "Speech...Decline: Computer Introduction in the Financial Industry." Technology Forecasting and Social Change. 31: 143-154. Fishbein, M. and I. Ajzen. Belief...Theory of Reasoned Action (TRA) (Fishbein and Ajzen, 1980) 13 3. Theory of Planned Behavior (TPB) (Ajzen, 1991) 15 4. Technology Acceptance Model
[Media for 21st century--towards human communication media].
Harashima, H
2000-05-01
Today, with the approach of the 21st century, attention is focused on multimedia communications combining computer, visual, and audio technologies. This article discusses the targets of communication media and the technological problems constituting the nucleus of multimedia. Communication media are becoming an environment from which no one can escape. Since the media have such great power, what is needed now is not to predict future technologies, but to envision the future world and take responsibility for future environments.
New computing systems and their impact on structural analysis and design
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.
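The symmetric/antisymmetric approximation underlying the partitioning strategy can be illustrated for a structure whose degrees of freedom map onto each other under a reflection. This is a toy sketch of the decomposition idea only, not the paper's finite-element implementation.

```python
import numpy as np

def sym_antisym_split(u, reflect):
    """Split a response vector u into symmetric and antisymmetric parts
    with respect to a reflection permutation of the degrees of freedom.
    Each part can then be computed on roughly half of the model and
    the two solves dispatched to separate processors."""
    ur = u[reflect]              # response with the DOFs mirrored
    u_sym = 0.5 * (u + ur)       # invariant under the reflection
    u_anti = 0.5 * (u - ur)      # flips sign under the reflection
    return u_sym, u_anti

# toy 4-DOF example: the reflection swaps DOF pairs (0,3) and (1,2)
reflect = np.array([3, 2, 1, 0])
u = np.array([1.0, 2.0, 4.0, 8.0])
s, a = sym_antisym_split(u, reflect)
# s + a reconstructs u exactly; s and a each need only half the DOFs
```

The split is exact for a truly symmetric structure; for nearly symmetric ones it becomes the approximation the paper exploits to expose parallelism.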
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while remaining profitable amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle, from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.
PRaVDA: High Energy Physics towards proton Computed Tomography
NASA Astrophysics Data System (ADS)
Price, T.; PRaVDA Consortium
2016-07-01
Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping-power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional X-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, thereby reducing range uncertainties and enhancing the treatment of cancer.
ERIC Educational Resources Information Center
TechTrends, 1992
1992-01-01
Reviews new educational technology products, including a microcomputer-based tutoring system, laser barcode reader, video/data projectors, CD-ROM for notebook computers, a system to increase a printer's power, data cartridge storage shell, knowledge-based decision tool, video illustrator, interactive videodiscs, surge protectors, scanner system,…
Mobile Applications for Extension
ERIC Educational Resources Information Center
Drill, Sabrina L.
2012-01-01
Mobile computing devices (smart phones, tablets, etc.) are rapidly becoming the dominant means of communication worldwide and are increasingly being used for scientific investigation. This technology can further our Extension mission by increasing our power for data collection, information dissemination, and informed decision-making. Mobile…
ERIC Educational Resources Information Center
Agron, Joe, Ed.
1999-01-01
Presents advice from five school administrators on how schools are meeting facility and business challenges in the new millennium. Issues discussed concern power needs, the Y2K computer problem, the explosion of new educational technology, school security, educational finance, and building deterioration. (GR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiter, C.
1998-07-01
The use of coal power generation applications is currently enjoying a renaissance. New highly efficient and cost-effective plant concepts together with environmental protection technologies are the main factors in this development. In addition, coal is available on the world market at attractive prices and in many places it is more readily available than gas. At the economical leading edge, standard power plant concepts have been developed to meet the requirements of emerging power markets. These concepts incorporate the high technological state-of-the-art and are designed to achieve lowest life-cycle costs. Low capital cost, fuel costs and operating costs in combination with shortest lead times are the main assets that make these plants attractive especially for IPPs and Developers. Other aspects of these comprehensive concepts include turnkey construction and the willingness to participate in BOO/BOT projects. One of the various examples of such a concept, the 2 x 610-MW Paiton Private Power Project Phase II in Indonesia, is described in this paper. At the technological leading edge, Siemens has always made a major contribution and was pacemaker for new developments in steam power plant technology. Modern coal-fired steam power plants use computer-optimized process and plant design as well as advanced materials, and achieve efficiencies exceeding 45%. One excellent example of this high technology is the world's largest lignite-fired steam power plant Schwarze Pumpe in Germany, which is equipped with two 800 MW Siemens steam turbine generators with supercritical steam parameters. The world's largest 50-Hz single-shaft turbine generator with supercritical steam parameters rated at 1025 MW for the Niederaussem lignite-fired steam power plant in Germany is a further example of the sophisticated Siemens steam turbine technology and sets a new benchmark in this field.
Application of green IT for physics data processing at INCDTIM
NASA Astrophysics Data System (ADS)
Farcas, Felix; Trusca, Radu; Albert, Stefan; Szabo, Izabella; Popeneciu, Gabriel
2012-02-01
Green IT is the next-generation technology used in datacenters around the world. Its benefits are of economic and financial interest: the new technologies are energy efficient, reduce cost, and avoid potential disruptions to the existing infrastructure. The most critical problem arises in the cooling systems, which are essential to the functioning of a datacenter. Green IT used in a Grid network benefits the environment and is the next phase in computing infrastructure, one that will fundamentally change the way we think about and use computing power. At the National Institute for Research and Development of Isotopic and Molecular Technologies Cluj-Napoca (INCDTIM) we have implemented such a technology, and its support has helped us process data in several domains, bringing INCDTIM onto the major Grid map with the RO-14-ITIM Grid site. In this paper we present the benefits the new technology has brought us and the results obtained in the year following the implementation of the new green technology.
Higher-order ice-sheet modelling accelerated by multigrid on graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian; Egholm, David
2013-04-01
Higher-order ice flow modelling is a very computer-intensive process owing primarily to the nonlinear influence of the horizontal stress coupling. When applied for simulating long-term glacial landscape evolution, the ice-sheet models must consider very long time series, while both high temporal and spatial resolution is needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited use in this field. However, recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this extent we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
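A minimal serial version of the red-black Gauss-Seidel smoother at the heart of such a solver might look like this. It is illustrative only: it relaxes the linear 2D Poisson equation rather than the nonlinear iSOSIA equations, but the red/black split is exactly what exposes the parallelism exploited on a GPU, since all points of one color update independently.

```python
import numpy as np

def redblack_gauss_seidel(u, f, h, sweeps=400):
    """Red-black Gauss-Seidel relaxation for -laplace(u) = f on a
    uniform grid with spacing h (Dirichlet boundaries held fixed).
    All 'red' points (i+j even) update in one vectorized pass, then
    all 'black' points; each pass is embarrassingly parallel."""
    ii, jj = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]),
                         indexing="ij")
    for color in range(2 * sweeps):
        mask = (ii + jj) % 2 == color % 2
        mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
        # five-point stencil: average of the four opposite-color neighbors
        u[mask] = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                          + np.roll(u, 1, 1) + np.roll(u, -1, 1)
                          + h * h * f)[mask]
    return u

# smooth a random guess toward the solution of -laplace(u) = 0, u = 0 on boundary
np.random.seed(0)
u = np.random.rand(17, 17)
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0
u = redblack_gauss_seidel(u, np.zeros((17, 17)), h=1 / 16)
```

In the FAS multigrid setting, a few such sweeps per level smooth the high-frequency error while coarser grids handle the rest, which is why only a modest sweep count per level is needed in practice.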
Energy 101: Energy Efficient Data Centers
None
2018-04-16
Data centers provide mission-critical computing functions vital to the daily operation of top U.S. economic, scientific, and technological organizations. These data centers consume large amounts of energy to run and maintain their computer systems, servers, and associated high-performance components; up to 3% of all U.S. electricity powers data centers. And as more information comes online, data centers will consume even more energy. Data centers can become more energy efficient by incorporating features like power-saving "stand-by" modes, energy monitoring software, and efficient cooling systems instead of energy-intensive air conditioners. These and other efficiency improvements to data centers can produce significant energy savings, reduce the load on the electric grid, and help protect the nation by increasing the reliability of critical computer operations.
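One common way to quantify data-center efficiency, not mentioned in this summary but standard in the industry, is Power Usage Effectiveness (PUE): the ratio of total facility energy to the energy actually delivered to the IT equipment.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness. A PUE of 1.0 is the ideal (all power
    reaches the IT gear); cooling and power-distribution overhead push
    typical legacy data centers closer to 2.0."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# a facility drawing 1500 kWh to deliver 1000 kWh of IT load
print(pue(1500, 1000))  # 1.5
```

Efficiency measures like the ones listed above (stand-by modes, better cooling) show up directly as a lower PUE, which makes it a convenient single number to track.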
Exascale Hardware Architectures Working Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmert, S; Ang, J; Chiang, P
2011-03-15
The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale, due to unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program.
This issue is relatively immediate, as there is only a small window of opportunity to influence hardware design for 2018 machines. Given the short timeline, a firm co-design methodology with vendors is of prime importance.
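The "less balanced machines" trend the working group describes is often quantified as machine balance: bytes of sustained memory bandwidth per floating-point operation. A minimal sketch with made-up illustrative numbers (not figures from the report):

```python
def balance_ratio(mem_bw_gbs: float, peak_gflops: float) -> float:
    """Machine balance: memory bytes moved per floating-point operation.

    GB/s divided by Gflop/s gives bytes per flop directly."""
    return mem_bw_gbs / peak_gflops

# Hypothetical systems: a balanced legacy node vs. a projected
# exascale-era node where compute grows faster than bandwidth.
legacy = balance_ratio(mem_bw_gbs=50.0, peak_gflops=100.0)        # 0.5 B/flop
projected = balance_ratio(mem_bw_gbs=4000.0, peak_gflops=40000.0)  # 0.1 B/flop
assert projected < legacy  # the "less balanced" trend the report describes
```

Falling bytes-per-flop is why the report expects 3D-stacked memory (more bandwidth) to still leave algorithms starved for capacity and bandwidth relative to compute.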
Technology Evaluation and Integration for Heavy Tactical Vehicles
2010-08-17
Key findings (recovered from briefing slides): modular hydraulic powered generator, with the hydraulic-powered alternator proved functional; PPMS hybrid starting system proved functional, works with wide [...]; and a collision-warning concept to compute inter-vehicle closing distance and stopping time, provide an audible/visual alert to the driver inside their reaction time window, and use COTS [...]
Research on computer-aided design of modern marine power systems
NASA Astrophysics Data System (ADS)
Ding, Dongdong; Zeng, Fanming; Chen, Guojun
2004-03-01
To make the MPS (Marine Power System) design process more economical and easier, a new CAD scheme is put forward that takes advantage of VR (Virtual Reality) and AI (Artificial Intelligence) technologies. This CAD system can shorten the design period and greatly reduce the demands on designers' experience. Some key issues, such as the selection of hardware and software for such a system, are also discussed.
Niamtu, J
2001-08-01
Carousel slide presentations have been used for academic and clinical presentations since the late 1950s. However, advances in computer technology have caused a paradigm shift, and digital presentations are quickly becoming standard for clinical presentations. The advantages of digital presentations include cost savings; portability; easy updating capability; Internet access; multimedia functions, such as animation, pictures, video, and sound; and customization to augment audience interest and attention. Microsoft PowerPoint has emerged as the most popular digital presentation software and is currently used by many practitioners with and without significant computer expertise. The user-friendly platform of PowerPoint enables even the novice presenter to incorporate digital presentations into his or her profession. PowerPoint offers many advanced options that, with a minimal investment of time, can be used to create more interactive and professional presentations for lectures, patient education, and marketing. Examples of advanced PowerPoint applications are presented in a stepwise manner to unveil the full power of PowerPoint. By incorporating these techniques, medical practitioners can easily personalize, customize, and enhance their PowerPoint presentations. Complications, pitfalls, and caveats are discussed to help presenters avoid misadventures in digital presentations. Relevant Web sites are listed to further update, customize, and communicate PowerPoint techniques.
Radiation chemistry related to nuclear power technology
NASA Astrophysics Data System (ADS)
Ishigure, Kenkichi
A brief review is given of radiation chemical problems in nuclear power technology, with emphasis on water radiolysis. Radiation chemistry in aqueous systems is closely related to problems such as corrosion of Zircaloy, the formation of insoluble corrosion products (crud), stress corrosion cracking of stainless steel in BWRs, and radioactive waste management. Results of constant extension rate tests on sensitized 304 stainless steel under irradiation are shown, and computer calculations were carried out to simulate model experiments on the release of crud from the corroding surface under irradiation, as well as the water radiolysis in the core of a BWR.
Computing at the speed limit (supercomputers)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernhard, R.
1982-07-01
The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.
Emerging Computer Media: On Image Interaction
NASA Astrophysics Data System (ADS)
Lippman, Andrew B.
1982-01-01
Emerging technologies such as inexpensive, powerful local computing, optical digital videodiscs, and the technologies of human-machine interaction are initiating a revolution in both image storage systems and image interaction systems. This paper will present a review of new approaches to computer media predicated upon three-dimensional position sensing, speech recognition, and high-density image storage. Examples will be shown, such as the Spatial Data Management Systems, wherein the free use of place results in intuitively clear retrieval systems and potentials for image association; the Movie-Map, wherein inherently static media generate dynamic views of data; and conferencing work-in-progress, wherein joint processing is stressed. Application to medical imaging will be suggested, but the primary emphasis is on the general direction of imaging and reference systems. We are passing the age of simple possibility in computer graphics and image processing and entering the age of ready usability.
Earth Science Informatics Comes of Age
NASA Technical Reports Server (NTRS)
Khalsa, Siri Jodha S.; Ramachandran, Rahul
2014-01-01
The volume and complexity of Earth science data have steadily increased, placing ever-greater demands on researchers, software developers and data managers tasked with handling such data. Additional demands arise from requirements being levied by funding agencies and governments to better manage, preserve and provide open access to data. Fortunately, over the past 10-15 years significant advances in information technology, such as increased processing power, advanced programming languages, more sophisticated and practical standards, and near-ubiquitous internet access have made the jobs of those acquiring, processing, distributing and archiving data easier. These advances have also led to an increasing number of individuals entering the field of informatics as it applies to Geoscience and Remote Sensing. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of data, information, and knowledge. Informatics also encompasses the use of computers and computational methods to support decision-making and other applications for societal benefits.
Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry
NASA Technical Reports Server (NTRS)
Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul
2003-01-01
Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.
NASA Technical Reports Server (NTRS)
1997-01-01
Small Business Innovation Research contracts from Goddard Space Flight Center to Thermacore Inc. have fostered the company's work on devices tagged "heat pipes" for space application. To control the extreme temperature ranges in space, heat pipes are important to spacecraft. The problem was to maintain an 8-watt central processing unit (CPU) at less than 90 C in a notebook computer using no power, with very little space available, and without using forced convection. Thermacore's answer was in the design of a powder metal wick that transfers CPU heat from a tightly confined spot to an area near available air flow. The heat pipe technology permits a notebook computer to be operated in any position without loss of performance. Miniature heat pipe technology has successfully been applied, such as in Pentium processor notebook computers. The company expects its heat pipes to accommodate desktop computers as well. Cellular phones, camcorders, and other hand-held electronics are possible applications for heat pipes.
Polymer waveguides for electro-optical integration in data centers and high-performance computers.
Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan
2015-02-23
To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.
NASA Astrophysics Data System (ADS)
Arkadov, G. V.; Zhukavin, A. P.; Kroshilin, A. E.; Parshikov, I. A.; Solov'ev, S. L.; Shishov, A. V.
2014-10-01
The article describes the "Virtual Digital VVER-Based Nuclear Power Plant" computerized system comprising a totality of verified initial data (sets of input data for a model intended for describing the behavior of nuclear power plant (NPP) systems in design and emergency modes of their operation) and a unified system of new-generation computation codes intended for carrying out coordinated computation of the variety of physical processes in the reactor core and NPP equipment. Experiments with the demonstration version of the "Virtual Digital VVER-Based NPP" computerized system have shown that it is in principle possible to set up a unified system of computation codes in a common software environment for carrying out interconnected calculations of various physical phenomena at NPPs constructed according to the standard AES-2006 project. With the full-scale version of the "Virtual Digital VVER-Based NPP" computerized system put in operation, the engineering, design, construction, and operating organizations concerned will have access to all necessary information relating to the NPP power unit project throughout its entire lifecycle. This domestically developed commercial-grade software product, operating as an independent application to the project, will bring about additional competitive advantages in the modern market of nuclear power technologies.
Automated distribution system management for multichannel space power systems
NASA Technical Reports Server (NTRS)
Fleck, G. W.; Decker, D. K.; Graves, J.
1983-01-01
A NASA sponsored study of space power distribution system technology is in progress to develop an autonomously managed power system (AMPS) for large space power platforms. The multichannel, multikilowatt, utility-type power subsystem proposed presents new survivability requirements and increased subsystem complexity. The computer controls under development for the power management system must optimize the power subsystem performance and minimize the life cycle cost of the platform. A distribution system management philosophy has been formulated which incorporates these constraints. Its implementation using a TI9900 microprocessor and FORTH as the programming language is presented. The approach offers a novel solution to the perplexing problem of determining the optimal combination of loads which should be connected to each power channel for a versatile electrical distribution concept.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alchorn, A L
Thank you for your interest in the activities of the Lawrence Livermore National Laboratory Computation Directorate. This collection of articles from the Laboratory's Science & Technology Review highlights the most significant computational projects, achievements, and contributions during 2002. In 2002, LLNL marked the 50th anniversary of its founding. Scientific advancement in support of our national security mission has always been the core of the Laboratory. So that researchers could better understand and predict complex physical phenomena, the Laboratory has pushed the limits of the largest, fastest, most powerful computers in the world. In the late 1950's, Edward Teller--one of the LLNL founders--proposed that the Laboratory commission a Livermore Advanced Research Computer (LARC) built to Livermore's specifications. He tells the story of being in Washington, DC, when John Von Neumann asked to talk about the LARC. He thought Teller wanted too much memory in the machine. (The specifications called for 20-30,000 words.) Teller was too smart to argue with him. Later Teller invited Von Neumann to the Laboratory and showed him one of the design codes being prepared for the LARC. He asked Von Neumann for suggestions on fitting the code into 10,000 words of memory, and flattered him about ''Labbies'' not being smart enough to figure it out. Von Neumann dropped his objections, and the LARC arrived with 30,000 words of memory. Memory, and how close memory is to the processor, is still of interest to us today. Livermore's first supercomputer was the Remington-Rand Univac-1. It had 5600 vacuum tubes and was 2 meters wide by 4 meters long. This machine was commonly referred to as a 1 KFlop machine [E+3]. Skip ahead 50 years. The ASCI White machine at the Laboratory today, produced by IBM, is rated at a peak performance of 12.3 TFlops or E+13.
We've improved computer processing power by 10 orders of magnitude in 50 years, and I do not believe there's any reason to think we won't improve another 10 orders of magnitude in the next 50 years. For years I have heard talk of hitting the physical limits of Moore's Law, but new technologies will take us into the next phase of computer processing power such as 3-D chips, molecular computing, quantum computing, and more. Big computers are icons or symbols of the culture and larger infrastructure that exists at LLNL to guide scientific discovery and engineering development. We have dealt with balance issues for 50 years and will continue to do so in our quest for a digital proxy of the properties of matter at extremely high temperatures and pressures. I believe that the next big computational win will be the merger of high-performance computing with information management. We already create terabytes--soon to be petabytes--of data. Efficiently storing, finding, visualizing and extracting data and turning that into knowledge which aids decision-making and scientific discovery is an exciting challenge. In the meantime, please enjoy this retrospective on computational physics, computer science, advanced software technologies, and applied mathematics performed by programs and researchers at LLNL during 2002. It offers a glimpse into the stimulating world of computational science in support of the national missions and homeland defense.
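The 10-orders-of-magnitude claim follows directly from the two data points the retrospective gives, the roughly 1-kiloflop Univac-1 (E+3) and the 12.3-TFlops ASCI White (E+13); a quick check:

```python
from math import log10

univac_flops = 1.0e3        # "a 1 KFlop machine [E+3]"
asci_white_flops = 12.3e12  # ASCI White, 12.3 TFlops peak (E+13)

# Orders of magnitude of improvement over the 50-year span
orders = log10(asci_white_flops / univac_flops)
assert 10.0 < orders < 10.2  # ~10 orders of magnitude, as the text states
```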
Sustainable mobile information infrastructures in low resource settings.
Braa, Kristin; Purkayastha, Saptarshi
2010-01-01
Developing countries represent the fastest growing mobile markets in the world. For people with no computing access, a mobile phone will be their first computing device. Mobile technologies offer a significant potential to strengthen health systems in developing countries with respect to community based monitoring, reporting, feedback to service providers, and strengthening communication and coordination between different health functionaries, medical officers and the community. However, there are various challenges in realizing this potential, including technological ones such as lack of power, as well as social, institutional, and usage issues. This paper reports a case study from India on mobile health implementation and use. An underlying principle guiding this paper is to see mobile technology not as a "stand alone device" but potentially an integral component of an integrated mobile supported health information infrastructure.
Methodology For Evaluation Of Technology Impacts In Space Electric Power Systems
NASA Technical Reports Server (NTRS)
Holda, Julie
2004-01-01
The Analysis and Management branch of the Power and Propulsion Office at NASA Glenn Research Center is responsible for performing complex analyses of the space power and in-space propulsion products developed by GRC. This work quantifies the benefits of advanced technologies to support ongoing advocacy efforts. The Power and Propulsion Office is committed to understanding how advancement in space technologies could benefit future NASA missions. It supports many diverse projects and missions throughout NASA as well as industry and academia. The area of work we are concentrating on is space technology investment strategies. Our goal is to develop a Monte Carlo-based tool to investigate technology impacts in space electric power systems. The framework is being developed at this stage and will be used to set up a computer simulation of a space electric power system (EPS). The outcome is expected to be a probabilistic assessment of critical technologies and potential development issues. We are developing methods for integrating existing spreadsheet-based tools into the simulation tool, and work is being done on defining interface protocols to enable rapid integration of future tools. The first task was to select a Monte Carlo-based simulation program for statistical modeling of the EPS model: I learned and evaluated Palisade's @Risk and Risk Optimizer software and utilized their capabilities for the EPS model, and I also evaluated similar software packages available from other suppliers (JMP, SPSS, Crystal Ball, VenSim, Analytica). The second task was to develop the framework for the tool, in which we had to define technology characteristics using weighting factors and probability distributions, define the simulation space, and add hard and soft constraints to the model. The third task is to incorporate preliminary cost factors into the model. A final task is developing a cross-platform solution of this framework.
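The core idea, sampling uncertain technology characteristics from probability distributions and propagating them through an EPS model, can be sketched in a few lines. Everything here is a made-up illustrative stand-in (distributions, parameters, and the toy mass model), not the tool or data from the abstract:

```python
import random

def simulate_eps_mass(n_trials=10_000, seed=42):
    """Toy Monte Carlo: sample uncertain technology factors and propagate
    them through a simple electric-power-system (EPS) mass model.
    Returns the median and 95th-percentile system mass in kg."""
    rng = random.Random(seed)
    masses = []
    for _ in range(n_trials):
        cell_eff = rng.triangular(0.18, 0.32, 0.28)   # solar cell efficiency
        specific_energy = rng.gauss(150.0, 20.0)      # battery Wh/kg
        power_req_w = 5000.0                          # fixed load requirement
        # Notional sizing relations (illustrative only):
        array_mass = power_req_w / (cell_eff * 1367.0) * 2.5
        battery_mass = (power_req_w * 0.6) / max(specific_energy, 1.0)
        masses.append(array_mass + battery_mass)
    masses.sort()
    return masses[int(0.5 * n_trials)], masses[int(0.95 * n_trials)]

median, p95 = simulate_eps_mass()
```

The spread between the median and the 95th percentile is the kind of probabilistic output (risk on a critical technology) that tools like @Risk produce over a spreadsheet model.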
Microprocessor control and networking for the AMPS breadboard
NASA Technical Reports Server (NTRS)
Floyd, Stephen A.
1987-01-01
Future space missions will require more sophisticated power systems, implying higher costs and more extensive crew and ground support involvement. To decrease this human involvement, as well as to protect and most efficiently utilize this important resource, NASA has undertaken major efforts to promote progress in the design and development of autonomously managed power systems. Two areas being actively pursued are autonomous power system (APS) breadboards and knowledge-based expert system (KBES) applications. The former are viewed as a requirement for the timely development of the latter. Not only will they serve as final testbeds for the various KBES applications, but will play a major role in the knowledge engineering phase of their development. The current power system breadboard designs are of a distributed microprocessor nature. The distributed nature, plus the need to connect various external computer capabilities (i.e., conventional host computers and symbolic processors), places major emphasis on effective networking. The communications and networking technologies for the first power system breadboard/test facility are described.
COMPUTER-AIDED DRUG DISCOVERY AND DEVELOPMENT (CADDD): in silico-chemico-biological approach
Kapetanovic, I.M.
2008-01-01
It is generally recognized that drug discovery and development are very time- and resource-consuming processes. There is an ever-growing effort to apply computational power to the combined chemical and biological space in order to streamline drug discovery, design, development, and optimization. In the biomedical arena, computer-aided or in silico design is being utilized to expedite and facilitate hit identification and hit-to-lead selection, optimize the absorption, distribution, metabolism, excretion and toxicity profile, and avoid safety issues. Commonly used computational approaches include ligand-based drug design (pharmacophore, a 3-D spatial arrangement of chemical features essential for biological activity), structure-based drug design (drug-target docking), and quantitative structure-activity and quantitative structure-property relationships. Regulatory agencies as well as the pharmaceutical industry are actively involved in development of computational tools that will improve the effectiveness and efficiency of the drug discovery and development process, decrease the use of animals, and increase predictability. It is expected that the power of CADDD will grow as the technology continues to evolve. PMID:17229415
Shulaker, Max M; Hills, Gage; Patil, Nishant; Wei, Hai; Chen, Hong-Yu; Wong, H-S Philip; Mitra, Subhasish
2013-09-26
The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. Owing to substantial fundamental imperfections inherent in CNTs, however, only very basic circuit blocks have been demonstrated. Here we show how these imperfections can be overcome, and demonstrate the first computer built entirely using CNT-based transistors. The CNT computer runs an operating system that is capable of multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we implement 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This experimental demonstration is the most complex carbon-based electronic system yet realized. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next generation of highly energy-efficient electronic systems.
NASA Technical Reports Server (NTRS)
Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.
1981-01-01
An active controls technology (ACT) system architecture was selected based on current-technology system elements, and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The selected system employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective, a probability of crucial-function failure of less than 1 x 10^-9 per 1-hr flight, can be met with current-technology system components if the software is assumed fault-free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control-law performance much more systematically and directly than the classical s-domain approach. The ACT control-law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface-rate capability.
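The redundancy arithmetic behind such a reliability objective can be illustrated with a simple independent-failure model of majority voting. This is a sketch only: the per-channel failure probability below is a made-up number, and the study's 1e-9 target also hinges on the stated coverage and fault-free-software assumptions, which this model ignores:

```python
from math import comb

def majority_redundancy_reliability(p_fail: float, n: int) -> float:
    """Probability that a majority-voting set of n identical channels fails,
    i.e. that more than half the channels fail independently."""
    k_min = n // 2 + 1  # failures needed to defeat the majority vote
    return sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
               for k in range(k_min, n + 1))

# Three redundant computers, each with a hypothetical failure probability
# of 1e-4 per hour: the voted set fails with probability ~3e-8 per hour
# (approximately 3 * p^2 to first order), a large gain over one channel.
p_set = majority_redundancy_reliability(1e-4, 3)
```

Reaching 1e-9 therefore requires either better channels, more of them (as with the four-computer pitch-stability function), or both.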
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has necessitated high performance machines. The computing power of a single processor has been steadily increasing but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers designed to be used in clusters meet high-availability and space requirements but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components that multiplies the performance of a single desktop machine while minimizing occupied space and remaining cost effective.
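How much a cluster "multiplies the performance of a single desktop machine" is bounded by the serial fraction of the workload, which Amdahl's law captures. A minimal sketch (the 95% parallel fraction and 8-node count are illustrative assumptions, not figures from the study):

```python
def amdahl_speedup(parallel_fraction: float, n_nodes: int) -> float:
    """Amdahl's-law estimate of speedup over a single machine when a
    fraction of the work parallelizes perfectly across n_nodes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_nodes)

# A simulation that is 95% parallelizable on an 8-node desktop cluster:
speedup = amdahl_speedup(0.95, 8)  # ~5.9x, well short of the ideal 8x
```

The gap between the ideal n-fold speedup and the Amdahl estimate is one reason "the proper software" matters as much as the hardware in small-scale clusters.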
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them toward running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
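The queue-management idea, splitting a large simulation into small spatial work units, handing them to volunteer nodes, and collecting results, can be sketched as follows. This is an in-memory stand-in for the paper's machinery (which uses a relational database and JavaScript browser clients); the names and the toy "model run" are illustrative assumptions:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class WorkUnit:
    unit_id: int
    cell_range: tuple  # spatial slice of the hydrologic model grid
    result: float = None

class VolunteerQueue:
    """Minimal sketch of volunteer-computing queue management: pending
    units are checked out by volunteer nodes and results are collected."""
    def __init__(self, units):
        self.pending = deque(units)
        self.done = []
    def checkout(self):
        """Hand the next pending unit to a volunteer, or None if empty."""
        return self.pending.popleft() if self.pending else None
    def submit(self, unit, value):
        """Record a volunteer's result for a completed unit."""
        unit.result = value
        self.done.append(unit)

queue = VolunteerQueue(WorkUnit(i, (i * 100, (i + 1) * 100)) for i in range(4))
while (u := queue.checkout()) is not None:
    queue.submit(u, sum(u.cell_range) * 0.001)  # stand-in for a model run
```

A production version would also need redundancy and timeouts (re-issuing units whose volunteers disappear), which is what motivates the paper's database-backed queue.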
Emerging CAE technologies and their role in Future Ambient Intelligence Environments
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2011-03-01
Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to the developments in a number of leading-edge technologies and their synergistic combinations/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra high-bandwidth networks; pervasive wireless communication; knowledge based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in the future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.
2011 Computation Directorate Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2012-04-11
From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence.
Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demand exascale computing and represent an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S.
industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.
NASA Technical Reports Server (NTRS)
Moses, Robert W.
2004-01-01
NASA's exploration goals for Mars and Beyond will require new power systems and in situ resource utilization technologies. Regenerative aerobraking may offer a revolutionary approach for in situ power generation and oxygen harvesting during these exploration missions. In theory, power and oxygen can be collected during aerobraking and stored for later use in orbit or on the planet. This technology would capture energy and oxygen from the plasma field that occurs naturally during hypersonic entry using well understood principles of magnetohydrodynamics and oxygen filtration. This innovative approach generates resources upon arrival at the operational site, and thus greatly differs from the traditional approach of taking everything you need with you from Earth. Fundamental analysis, computational fluid dynamics, and some testing of experimental hardware have established the basic feasibility of generating power during a Mars entry. Oxygen filtration at conditions consistent with spacecraft entry parameters at Mars has been studied to a lesser extent. Other uses of the MHD power are presented. This paper illustrates how some features of regenerative aerobraking may be applied to support human and robotic missions at Mars.
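The magnetohydrodynamic power extraction mentioned above follows the standard ideal-MHD channel relation. A minimal sketch of that relation, with illustrative plasma parameters that are assumptions rather than mission values:

```python
def mhd_power_density(sigma_s_per_m, u_m_per_s, b_tesla, load_factor=0.5):
    """Ideal volumetric power extracted by an MHD generator channel:
    p = sigma * u^2 * B^2 * K * (1 - K), maximized at load factor K = 0.5.
    Real entry-flow effects (Hall currents, boundary layers) are ignored.
    """
    k = load_factor
    return sigma_s_per_m * u_m_per_s**2 * b_tesla**2 * k * (1 - k)

# Illustrative (not mission-derived) values for a weakly ionized entry plasma:
# conductivity 100 S/m, flow speed 5 km/s, applied field 0.5 T.
p_w_per_m3 = mhd_power_density(100.0, 5000.0, 0.5)
```

With these assumed values the channel yields on the order of 1.6e8 W per cubic meter of interaction volume, which is why even a small MHD section can harvest meaningful power during hypersonic entry.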
Lunar PMAD technology assessment
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
1992-01-01
This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
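The stage-wise mass roll-up described above can be sketched as follows. The overhead fractions and stage masses are illustrative assumptions for the sketch, not values from the report:

```python
def component_mass(stage_masses_kg, control_frac=0.10,
                   enclosure_frac=0.15, thermal_frac=0.20):
    """Roll up a PMAD component mass from its power-stage masses.

    stage_masses_kg: mass of each conversion stage (e.g. input filter,
    switch bridge, output filter), as would be estimated from rated
    power and a technology-dependent specific mass (kg/kW).
    Control/monitoring, enclosure, and thermal management are modeled
    here as simple fractions of the electronics mass; the actual report
    uses dedicated algorithms for each subsystem.
    """
    stages = sum(stage_masses_kg)
    control = control_frac * stages
    enclosure = enclosure_frac * (stages + control)
    thermal = thermal_frac * stages
    return stages + control + enclosure + thermal

# Hypothetical converter with three stages estimated at 4, 9, and 5 kg.
total_kg = component_mass([4.0, 9.0, 5.0])
```

The point of such a model library is that any PMAD architecture trade study reduces to summing component masses computed consistently from the same stage-level breakdowns.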
Scientific American Inventions From Outer Space: Everyday Uses For NASA Technology
NASA Technical Reports Server (NTRS)
Baker, David
2000-01-01
The purpose of this book is to present some of the inventions highlighted in the yearly publication of the National Aeronautics and Space Administration (NASA) Spinoff. These inventions cover a wide range, some of which include improvements in health, medicine, public safety, energy, environment, resource management, computer technology, automation, construction, transportation, and manufacturing technology. NASA technology has brought forth thousands of commercial products which include athletic shoes, portable x-ray machines, and scratch-resistant sunglasses, guidance systems, lasers, solar power, robotics and prosthetic devices. These products are examples of NASA research innovations which have positively impacted the community.
High performance computing and communications: Advancing the frontiers of information technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools, technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science.
This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.
Cyber-workstation for computational neuroscience.
Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C
2010-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
2010-03-01
Report documentation fragments (dates covered: October 2008 to October 2009). Title: "Performance and Power Optimization for Cognitive Processor Design Using..." Recoverable contents: cognitive models and algorithms for intelligent text recognition, including a Brain-State-in-a-Box neural network model; an ASIC-style design and synthesis flow for the FPU; figures showing the final layouts and a projected performance and power roadmap.
NASA Astrophysics Data System (ADS)
Kashansky, Vladislav V.; Kaftannikov, Igor L.
2018-02-01
Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our own experience and findings on combining the power of SLURM, BOINC, and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.
Computation of glint, glare, and solar irradiance distribution
Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh
2017-08-01
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
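The core idea, scaling camera pixel intensities into irradiance units using the imaged Sun as a radiometric reference, can be sketched crudely as follows. The simple thresholded Sun-disk mask and single scale factor are simplifying assumptions; the patented method includes geometric and radiometric corrections omitted here:

```python
def irradiance_distribution(receiver_img, sun_img, dni_w_per_m2):
    """Map receiver-image pixel counts to W/m^2 using a Sun image.

    The calibration constant is chosen so that the mean intensity of
    the Sun disk corresponds to the known direct normal irradiance
    (DNI). Images are given as nested lists of pixel intensities.
    """
    flat = [p for row in sun_img for p in row]
    peak = max(flat)
    disk = [p for p in flat if p > 0.5 * peak]   # crude Sun-disk mask
    counts_to_w_m2 = dni_w_per_m2 / (sum(disk) / len(disk))
    return [[counts_to_w_m2 * p for p in row] for row in receiver_img]

# Toy 1x2 receiver image calibrated against a 1x3 Sun image at 950 W/m^2 DNI.
flux_map = irradiance_distribution([[10.0, 20.0]], [[200.0, 180.0, 10.0]], 950.0)
```

The same calibrated map serves both purposes in the patent: evaluated on the receiver surface it gives the solar irradiance distribution, and evaluated on a reflective entity it quantifies glint/glare.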
Computation of glint, glare, and solar irradiance distribution
Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh
2015-08-11
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.
1999-01-01
We are on the path to meet the major challenges ahead for TCAD (technology computer aided design). The emerging computational grid will ultimately solve the challenge of limited computational power. The Modular TCAD Framework will solve the TCAD software challenge once TCAD software developers realize that there is no other way to meet industry's needs. The modular TCAD framework (MTF) also provides the ideal platform for solving the TCAD model challenge by rapid implementation of models in a partial differential solver.
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault-tolerant digital computer-based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) assessment technologies were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
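As a flavor of the combinatorial reliability arithmetic such tools automate (CARE 3 and GLOSS themselves use far richer stochastic models with coverage and fault-handling effects), the classic triple-modular-redundancy formula can be computed directly:

```python
import math

def tmr_reliability(failure_rate_per_hr, mission_hrs):
    """Reliability of triple modular redundancy with a perfect voter:
    the system survives if at least 2 of 3 identical modules survive.
    With exponentially distributed module lifetimes,
        R_module = exp(-lambda * t)
        R_TMR    = 3*R^2 - 2*R^3
    """
    r = math.exp(-failure_rate_per_hr * mission_hrs)
    return 3 * r**2 - 2 * r**3

# Illustrative numbers: a 10-hour mission, module failure rate 1e-4 per hour.
r_sys = tmr_reliability(1e-4, 10.0)
```

For highly reliable modules (R close to 1) the TMR system reliability exceeds that of a single module, which is exactly the kind of stochastic comparison between candidate architectures that the abstract says was previously hard to make.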
Assessment of Li/SOCL2 Battery Technology; Reserve, Thin-Cell Design. Volume 3
1990-06-01
Extraction fragments from the report: "...power density and efficiency of an operating electrochemical system. The method is general; the examples to illustrate the selected points pertain to..."; a citation fragment, "...System: Design, Manufacturing and QC Considerations), S. Szpak, P. A. Mosier-Boss, and J. J. Smith, 34th International Power Sources Symposium, Cherry..."; and "...(i) the computer time required to evaluate the integral in Eqn. III, and (ii) the lack of generality in the attainable lineshapes. However, since this..."
Accelerating Technology Development through Integrated Computation and Experimentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekhawat, Dushyant; Srivastava, Rameshwar D.; Ciferno, Jared
2013-08-15
This special section of Energy & Fuels comprises a selection of papers presented at the topical conference “Accelerating Technology Development through Integrated Computation and Experimentation”, sponsored and organized by the United States Department of Energy’s National Energy Technology Laboratory (NETL) as part of the 2012 American Institute of Chemical Engineers (AIChE) Annual Meeting held in Pittsburgh, PA, Oct 28-Nov 2, 2012. That topical conference focused on the latest research and development efforts in five main areas related to fossil energy, with each area focusing on the utilization of both experimental and computational approaches: (1) gas separations (membranes, sorbents, and solvents for CO{sub 2}, H{sub 2}, and O{sub 2} production), (2) CO{sub 2} utilization (enhanced oil recovery, chemical production, mineralization, etc.), (3) carbon sequestration (flow in natural systems), (4) advanced power cycles (oxy-combustion, chemical looping, gasification, etc.), and (5) fuel processing (H{sub 2} production for fuel cells).
Using Speech Recognition to Enhance the Tongue Drive System Functionality in Computer Access
Huo, Xueliang; Ghovanloo, Maysam
2013-01-01
Tongue Drive System (TDS) is a wireless tongue operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with a commercially available speech recognition software, the Dragon Naturally Speaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing. PMID:22255801
Virtual reality simulation in neurosurgery: technologies and evolution.
Chan, Sonny; Conti, François; Salisbury, Kenneth; Blevins, Nikolas H
2013-01-01
Neurosurgeons are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. With improvements in computational power and advances in visual and haptic display technologies, virtual surgical environments can now offer potential benefits for surgical training, planning, and rehearsal in a safe, simulated setting. This article introduces the various classes of surgical simulators and their respective purposes through a brief survey of representative simulation systems in the context of neurosurgery. Many technical challenges currently limit the application of virtual surgical environments. Although we cannot yet expect a digital patient to be indistinguishable from reality, new developments in computational methods and related technology bring us closer every day. We recognize that the design and implementation of an immersive virtual reality surgical simulator require expert knowledge from many disciplines. This article highlights a selection of recent developments in research areas related to virtual reality simulation, including anatomic modeling, computer graphics and visualization, haptics, and physics simulation, and discusses their implication for the simulation of neurosurgery.
Delivery Systems for Distance Education. ERIC Digest.
ERIC Educational Resources Information Center
Schamber, Linda
This ERIC digest provides a brief overview of the video, audio, and computer technologies that are currently used to deliver instruction for distance education programs. The video systems described include videoconferencing, low-power television (LPTV), closed-circuit television (CCTV), instructional fixed television service (ITFS), and cable…
A computer network with scada and case tools for on-line process control in greenhouses
NASA Astrophysics Data System (ADS)
Gieling, Th. H.; van Meurs, W. Th. M.; Janssen, H. J. J.
Climate control computers in greenhouses are used to control heating and ventilation, supply water and dilute and dispense nutrients. They integrate models into optimally controlled systems. This paper describes how information technology, as in use in other sectors of industry, is applied to greenhouse control. The introduction of modern software and hardware concepts in horticulture adds power and extra opportunities to climate control in greenhouses.
NASA Technical Reports Server (NTRS)
Orr, Joel N.
1995-01-01
This reflection on the human-computer interface and its requirements as virtual technology advances proposes a new term: 'Pezonomics'. The term replaces ergonomics ('the law of work') with a definition pointing to 'the law of play.' The necessity of this term, the author reasons, comes from the need to 'capture the essence of play and calibrate our computer systems to its cadences.' Pezonomics will ensure that artificial environments, in particular virtual reality, are user friendly.
A computer network with SCADA and case tools for on-line process control in greenhouses.
Gieling ThH; van Meurs WTh; Janssen, H J
1996-01-01
Climate control computers in greenhouses are used to control heating and ventilation, supply water and dilute and dispense nutrients. They integrate models into optimally controlled systems. This paper describes how information technology, as in use in other sectors of industry, is applied to greenhouse control. The introduction of modern software and hardware concepts in horticulture adds power and extra opportunities to climate control in greenhouses.
An on-line reactivity and power monitor for a TRIGA reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binney, Stephen E.; Bakir, Alia J.
1988-07-01
As personal computers (PCs) become an increasingly significant influence on modern technology, it is reasonable that at some point they would be used to interface with TRIGA reactors. A personal computer with a special interface board has been used to monitor key parameters during operation of the Oregon State University TRIGA Reactor (OSTR). A description of the apparatus used and sample results are included.
Dynamic VMs placement for energy efficiency by PSO in cloud computing
NASA Astrophysics Data System (ADS)
Dashti, Seyed Ebrahim; Rahmani, Amir Masoud
2016-03-01
Recently, cloud computing is growing fast and helps to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between the specifications of physical machines and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, so that profits decrease. To guarantee quality of service for users' tasks and reduce energy consumption, we propose modifying Particle Swarm Optimisation to reallocate migrated virtual machines in overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power saving. Simulation results in CloudSim demonstrate that, under conditions close to a real environment, our method saves as much as 14% more energy and significantly reduces the number of migrations and the simulation time compared with previous works.
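A minimal sketch of PSO-driven VM-to-host reallocation follows. The power model (idle power plus a term linear in utilisation), the penalty for overloaded hosts, and all constants are illustrative assumptions, not the paper's modified algorithm:

```python
import random

def pso_vm_placement(vm_cpu, host_cap, n_particles=20, iters=100,
                     w=0.7, c1=1.5, c2=1.5, seed=1):
    """Reassign migrated VMs to hosts with particle swarm optimisation.

    Each particle is a continuous vector with one dimension per VM,
    decoded to a host index by rounding and clamping.
    """
    random.seed(seed)
    n_vm, n_host = len(vm_cpu), len(host_cap)

    def decode(x):
        return [min(n_host - 1, max(0, round(xi))) for xi in x]

    def power(assign):
        load = [0.0] * n_host
        for v, h in enumerate(assign):
            load[h] += vm_cpu[v]
        total = 0.0
        for h in range(n_host):
            if load[h] > host_cap[h]:          # overloaded host: infeasible
                return float("inf")
            if load[h] > 0:                    # assumed idle + linear power
                total += 100.0 + 150.0 * load[h] / host_cap[h]
        return total

    pos = [[random.uniform(0, n_host - 1) for _ in range(n_vm)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_vm for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [power(decode(p)) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_vm):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = power(decode(pos[i]))
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest), gbest_f

# Three VMs (CPU shares) reassigned across two hosts.
placement, watts = pso_vm_placement([0.3, 0.3, 0.3], [1.0, 1.0])
```

Because lightly loaded hosts still pay the assumed idle power, the fitness function naturally rewards packing VMs onto fewer hosts, which mirrors the consolidation behaviour the abstract describes.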
Triple-server blind quantum computation using entanglement swapping
NASA Astrophysics Data System (ADS)
Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua
2014-04-01
Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol where the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, requiring no quantum computational power, quantum memory, or ability to prepare quantum states; the client only needs to be capable of accessing quantum channels.
Rich client data exploration and research prototyping for NOAA
NASA Astrophysics Data System (ADS)
Grossberg, Michael; Gladkova, Irina; Guch, Ingrid; Alabi, Paul; Shahriar, Fazlul; Bonev, George; Aizenman, Hannah
2009-08-01
Data from satellites and model simulations is increasing exponentially as observations and model computing power improve rapidly. Not only is technology producing more data, but it often comes from sources all over the world. Researchers and scientists who must collaborate are also located globally. This work presents a software design and technologies which will make it possible for groups of researchers to explore large data sets visually together without the need to download these data sets locally. The design will also make it possible to exploit high performance computing remotely and transparently to analyze and explore large data sets. Computer power, high quality sensing, and data storage capacity have improved at a rate that outstrips our ability to develop software applications that exploit these resources. It is impractical for NOAA scientists to download all of the satellite and model data that may be relevant to a given problem and the computing environments available to a given researcher range from supercomputers to only a web browser. The size and volume of satellite and model data are increasing exponentially. There are at least 50 multisensor satellite platforms collecting Earth science data. On the ground and in the sea there are sensor networks, as well as networks of ground based radar stations, producing a rich real-time stream of data. This new wealth of data would have limited use were it not for the arrival of large-scale high-performance computation provided by parallel computers, clusters, grids, and clouds. With these computational resources and vast archives available, it is now possible to analyze subtle relationships which are global, multi-modal and cut across many data sources. Researchers, educators, and even the general public, need tools to access, discover, and use vast data center archives and high performance computing through a simple yet flexible interface.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Vinod
2017-05-05
High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank, nanofluidized, molten-salt-based thermocline TES system under various concentrations and sizes of the suspended particles. Our objectives were to utilize sensible heat that operates with least irreversibility by using nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency, and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.
NAS Technical Summaries, March 1993 - February 1994
NASA Technical Reports Server (NTRS)
1995-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993
NASA Technical Reports Server (NTRS)
1994-01-01
NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulliver, John S.
2015-03-01
A conventional hydropower turbine aeration test-bed was developed for computational routines and software tools for improving environmental mitigation technologies in conventional hydropower systems. In achieving this goal, we partnered with Alstom, a global leader in energy technology development and United States power generation, with additional funding from the Initiative for Renewable Energy and the Environment (IREE) and the College of Science and Engineering (CSE) at the University of Minnesota (UMN).
Using Multi-Core Systems for Rover Autonomy
NASA Technical Reports Server (NTRS)
Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.
2010-01-01
Task objectives are: (1) Develop and demonstrate key capabilities for rover long-range science operations using multi-core computing: (a) adapt three rover technologies to execute on a SOA multi-core processor, (b) illustrate performance improvements achieved, (c) demonstrate adapted capabilities with rover hardware. (2) Target three high-level autonomy technologies: (a) two for onboard data analysis, (b) one for onboard command sequencing/planning. (3) Technologies identified as enabling for future missions. (4) Benefits will be measured along several metrics: (a) execution time / power requirements, (b) number of data products processed per unit time, (c) solution quality.
NASA Astrophysics Data System (ADS)
Chen, R.; Xi, X.; Zhao, X.; He, L.; Yao, H.; Shen, R.
2016-12-01
Dense 3D magnetotelluric (MT) data acquisition has the benefit of suppressing static shift and topography effects and can achieve high-precision, high-resolution inversion of underground structure. The method may play an important role in mineral exploration, geothermal resources exploration, and hydrocarbon exploration. For large-scale 3D MT data acquisition, it is necessary to greatly reduce the power consumption of the MT signal receiver while using a sensor network to monitor the data quality of deployed receivers. We adopted a series of technologies to realize this goal. First, we designed a low-power embedded computer that couples tightly with the other parts of the MT receiver and supports a wireless sensor network; its power consumption is less than 1 watt. We then designed a 4-channel data acquisition subsystem that supports 24-bit analog-to-digital conversion, GPS synchronization, and real-time digital signal processing. Furthermore, we developed the power supply and power management subsystem for the MT receiver. Finally, a series of software tools supporting data acquisition, calibration, the wireless sensor network, and testing were developed; the software running on a personal computer can monitor and control over 100 MT receivers in the field for data acquisition and quality control. The total power consumption of the receiver is about 2 watts in full operation, and standby consumption is less than 0.1 watt. Our testing showed that the MT receiver can acquire good-quality data at the ground with an electric dipole as short as 3 m. Over 100 MT receivers were built and used for large-scale geothermal exploration in China with great success.
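Power figures like those reported (about 2 W acquiring, under 0.1 W standby) make field battery sizing easy to estimate. A back-of-envelope sketch, where the battery capacity and acquisition duty cycle are assumptions rather than figures from the abstract:

```python
# Back-of-envelope battery-life estimate for a receiver with the reported power
# draw (~2 W acquiring, <0.1 W standby). Battery size and duty cycle are assumed.

def runtime_hours(battery_wh: float, duty_cycle: float,
                  p_active_w: float = 2.0, p_standby_w: float = 0.1) -> float:
    """Runtime for a given acquisition duty cycle (fraction of time at full power)."""
    avg_power = duty_cycle * p_active_w + (1.0 - duty_cycle) * p_standby_w
    return battery_wh / avg_power

# Assumed 100 Wh battery pack, acquiring 50% of the time:
print(f"{runtime_hours(100.0, 0.5):.0f} h")  # ~95 h
```

This is why cutting receiver power by an order of magnitude matters for multi-day, hundred-station deployments.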
Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Keren
Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing within a manageable power budget. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we have created an integrated modeling and simulation environment that uniquely captures the physical behavior of the optical layer.
The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures on Exascale computing systems.
Computers and videodiscs in pathology education: ECLIPS as an example of one approach.
Thursh, D R; Mabry, F; Levy, A H
1986-03-01
We have enumerated ways in which the evolving computer and videodisc technologies are being used in pathology education and discussed in some detail the particular use with which we are most familiar, text management. While it is probably premature to speculate as to how these technologies will ultimately affect pathology education, one recent trend--the convergence that seems to be developing between those working on expert consulting systems and those working primarily on educational applications--will probably influence this impact substantially. We believe that we are moving, from opposite directions, toward the same end result, namely, the use of machine intelligence to facilitate and augment human learning. We expect that, as the two groups come closer together, very powerful, interesting, and eminently useful educational tools will emerge. While this is occurring, we think that most would agree that one of the very urgent needs is to develop forums in which the academic and practice communities can interact with researchers and developers. With apologies to Clemenceau, computers are rapidly becoming too important to be left exclusively to computer scientists. Such forums would serve to give these communities a chance to learn what the new technologies have to offer and give developers a better idea of where these technologies can make the greatest contributions.
Use of high performance networks and supercomputers for real-time flight simulation
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1993-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.
Terascale Computing in Accelerator Science and Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Kwok
2002-08-21
We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.
An Integrated Software Package to Enable Predictive Simulation Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang
The power grid is increasing in complexity due to the deployment of smart grid technologies, which vastly increase the size and complexity of power grid simulation and modeling. This increasing complexity necessitates not only the use of high-performance computing (HPC) techniques but also a smooth, well-integrated interplay between HPC applications. This paper presents a new software package that integrates HPC applications with a web-based visualization tool based on a middleware framework that supports data communication between the different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.
High Performance, Dependable Multiprocessor
NASA Technical Reports Server (NTRS)
Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric;
2006-01-01
With the ever-increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort, Honeywell has teamed with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.
Autonomous power expert system
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.
1990-01-01
The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control technologies to the Space Station Freedom Electrical Power Systems (SSF/EPS). The objectives of the program are to establish artificial intelligence/expert system technology paths, to create knowledge based tools with advanced human-operator interfaces, and to integrate and interface knowledge-based and conventional control schemes. This program is being developed at the NASA-Lewis. The APS Brassboard represents a subset of a 20 KHz Space Station Power Management And Distribution (PMAD) testbed. A distributed control scheme is used to manage multiple levels of computers and switchgear. The brassboard is comprised of a set of intelligent switchgear used to effectively switch power from the sources to the loads. The Autonomous Power Expert System (APEX) portion of the APS program integrates a knowledge based fault diagnostic system, a power resource scheduler, and an interface to the APS Brassboard. The system includes knowledge bases for system diagnostics, fault detection and isolation, and recommended actions. The scheduler autonomously assigns start times to the attached loads based on temporal and power constraints. The scheduler is able to work in a near real time environment for both scheduling and dynamic replanning.
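The scheduling behavior described here, assigning start times to loads under temporal and power constraints, can be illustrated with a toy greedy policy. The loads, power limit, and policy below are hypothetical illustrations, not APEX's actual algorithm:

```python
# Toy sketch of power-constrained load scheduling in the spirit of the APEX
# scheduler described above. Loads, limits, and the greedy policy are assumed.

def schedule_loads(loads, power_limit_w, horizon):
    """Greedily assign each (name, watts, duration) load the earliest start
    slot at which the power limit is never exceeded over its duration."""
    usage = [0.0] * horizon          # committed power per time slot
    starts = {}
    for name, watts, dur in loads:
        for t in range(horizon - dur + 1):
            if all(usage[t + k] + watts <= power_limit_w for k in range(dur)):
                for k in range(dur):
                    usage[t + k] += watts
                starts[name] = t
                break
        else:
            starts[name] = None      # cannot be scheduled within this horizon
    return starts

starts = schedule_loads([("heater", 60, 3), ("pump", 50, 2), ("comms", 30, 2)],
                        power_limit_w=100, horizon=8)
print(starts)  # {'heater': 0, 'pump': 3, 'comms': 0}
```

A real scheduler like APEX must also replan dynamically as faults are detected, which a one-pass greedy policy does not capture.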
A Multi-Temporal Context-Aware System for Competences Management
ERIC Educational Resources Information Center
Rosa, João H.; Barbosa, Jorge L.; Kich, Marcos; Brito, Lucas
2015-01-01
The evolution of computing technology and wireless networks has contributed to the miniaturization of mobile devices and their increase in power, providing services anywhere and anytime. In this scenario, applications have considered the user's contexts to make decisions (Context Awareness). Context-aware applications have enabled new…
Symbionic Technology and Education. Report 83-02.
ERIC Educational Resources Information Center
Cartwright, Glenn F.
Research findings indicate that major breakthroughs in education will have to occur through direct cortical intervention, using either chemical or electronic means. It will eventually be possible to build sophisticated intelligence amplifiers that will be internal extensions of our brains, significantly more powerful than present day computers,…
TOOLS FOR PRESENTING SPATIAL AND TEMPORAL PATTERNS OF ENVIRONMENTAL MONITORING DATA
The EPA Health Effects Research Laboratory has developed this data presentation tool for use with a variety of types of data which may contain spatial and temporal patterns of interest. The technology links mainframe computing power to the new generation of "desktop publishing" ha...
The NASA space power technology program
NASA Technical Reports Server (NTRS)
Stephenson, R. Rhoads
1992-01-01
NASA has a broad technology program in the field of space power. This paper describes that program, including the roles and responsibilities of the various NASA field centers and major contractors. In the power source area, the paper discusses the SP-100 Space Nuclear Power Project, which has been under way for about seven years and is making substantial progress toward development of components for a 100-kilowatt power system that can be scaled to other sizes. This system is a candidate power source for nuclear electric propulsion, as well as for a power plant for a lunar base. In the energy storage area, the paper describes NASA's battery- and fuel-cell development programs. NASA is actively working on NiCd, NiH2, and lithium batteries. A status update is also given on a U.S. Air Force-sponsored program to develop a large (150 ampere-hour) lithium-thionyl chloride battery for the Centaur upper-stage launch vehicle. Finally, the area of power management and distribution (PMAD) is addressed, including power system components such as solid-state switches and power integrated circuits. Automated load management and other computer-controlled functions offer considerable payoffs. The state of the art in space power is described, along with NASA's medium- and long-term goals in the area.
Robotic insects: Manufacturing, actuation, and power considerations
NASA Astrophysics Data System (ADS)
Wood, Robert
2015-12-01
As the characteristic size of a flying robot decreases, the challenges for successful flight revert to basic questions of fabrication, actuation, fluid mechanics, stabilization, and power - whereas such questions have in general been answered for larger aircraft. When developing a robot on the scale of a housefly, all hardware must be developed from scratch as there is nothing "off-the-shelf" which can be used for mechanisms, sensors, or computation that would satisfy the extreme mass and power limitations. With these challenges in mind, this talk will present progress in the essential technologies for insect-like robots with an emphasis on multi-scale manufacturing methods, high power density actuation, and energy-efficient power distribution.
Perspectives on the Future of CFD
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2000-01-01
This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which has pioneered the field of flow simulation. Over time, CFD has progressed along with computing power, and numerical methods have advanced as CPU and memory capacity have increased. Complex configurations are now routinely computed, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources have shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.
Wearable computer technology for dismounted applications
NASA Astrophysics Data System (ADS)
Daniels, Reginald
2010-04-01
Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.
High performance real-time flight simulation at NASA Langley
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1994-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. The technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.
A Wireless Biomedical Signal Interface System-on-Chip for Body Sensor Networks.
Lei Wang; Guang-Zhong Yang; Jin Huang; Jinyong Zhang; Li Yu; Zedong Nie; Cumming, D R S
2010-04-01
Recent years have seen the rapid development of biosensor technology, system-on-chip design, wireless technology, and ubiquitous computing. When assembled into an autonomous body sensor network (BSN), these technologies become powerful tools in well-being monitoring, medical diagnostics, and personal connectivity. In this paper, we describe the first demonstration of a fully customized mixed-signal silicon chip that has most of the attributes required for use in a wearable or implantable BSN. Our intellectual-property blocks include a low-power analog sensor interface for temperature and pH, a data multiplexing and conversion module, a digital platform based around an 8-b microcontroller, data encoding for spread-spectrum wireless transmission, and an RF section requiring very few off-chip components. The chip has been fully evaluated and tested by connection to external sensors, and it satisfied typical system requirements.
Robotic vehicles for planetary exploration
NASA Astrophysics Data System (ADS)
Wilcox, Brian; Matthies, Larry; Gennery, Donald; Cooper, Brian; Nguyen, Tam; Litwin, Todd; Mishkin, Andrew; Stone, Henry
A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques used are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunakhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.
Design of Remote Monitoring System of Irrigation based on GSM and ZigBee Technology
NASA Astrophysics Data System (ADS)
Xiao xi, Zheng; Fang, Zhao; Shuaifei, Shao
2018-03-01
To solve the problems of a low level of irrigation automation and the waste of water resources, a remote monitoring system for farmland irrigation based on GSM communication technology and ZigBee technology was designed. The system is composed of sensors, a GSM communication module, a ZigBee module, a host computer, valves, and so on. The system opens and closes the pump and the electromagnetic valve as needed, and transmits the monitoring information to the host computer or the user's mobile phone through the GSM communication network. Experiments show that the system has low power consumption and a friendly man-machine interface, and is convenient and simple to use. It can monitor the agricultural environment remotely, control related irrigation equipment at any time and place, and better meet the needs of remote monitoring of farmland irrigation.
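The valve-control logic such a system needs can be sketched as simple hysteresis (deadband) control; the thresholds and interface below are hypothetical illustrations, not the paper's implementation:

```python
# Minimal sketch of threshold-based irrigation control with hysteresis.
# Sensor names, thresholds, and the actuator interface are all assumed;
# the real GSM/ZigBee transport is omitted for clarity.

def irrigation_step(soil_moisture_pct: float, low: float = 30.0,
                    high: float = 60.0, pump_on: bool = False) -> bool:
    """Hysteresis control: start below `low`, stop above `high`, else hold state."""
    if soil_moisture_pct < low:
        return True   # open valve / start pump
    if soil_moisture_pct > high:
        return False  # close valve / stop pump
    return pump_on    # inside the deadband, keep the current state

assert irrigation_step(20.0) is True               # dry: start irrigating
assert irrigation_step(70.0, pump_on=True) is False  # wet: stop
assert irrigation_step(45.0, pump_on=True) is True   # deadband: keep running
```

The deadband prevents the pump from rapidly cycling when the reading hovers near a single threshold.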
The main challenges that remain in applying high-throughput sequencing to clinical diagnostics.
Loeffelholz, Michael; Fofanov, Yuriy
2015-01-01
Over the last 10 years, the quality, price and availability of high-throughput sequencing instruments have improved to the point that this technology may be close to becoming a routine tool in the diagnostic microbiology laboratory. Two groups of challenges, however, have to be resolved in order to move this powerful research technology into routine use in the clinical microbiology laboratory. The computational/bioinformatics challenges include data storage cost and privacy concerns, requiring analysis to be performed without access to cloud storage or expensive computational infrastructure. The logistical challenges include interpretation of complex results and acceptance and understanding of the advantages and limitations of this technology by the medical community. This article focuses on the approaches to address these challenges, such as file formats, algorithms, data collection, reporting and good laboratory practices.
Robotic vehicles for planetary exploration
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Matthies, Larry; Gennery, Donald; Cooper, Brian; Nguyen, Tam; Litwin, Todd; Mishkin, Andrew; Stone, Henry
1992-01-01
A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques used are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunakhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.
Micro-computed tomography imaging and analysis in developmental biology and toxicology.
Wise, L David; Winkelmann, Christopher T; Dogdas, Belma; Bagchi, Ansuman
2013-06-01
Micro-computed tomography (micro-CT) is a high resolution imaging technique that has expanded and strengthened in use since it was last reviewed in this journal in 2004. The technology has expanded to include more detailed analysis of bone, as well as soft tissues, by use of various contrast agents. It is increasingly applied to questions in developmental biology and developmental toxicology. Relatively high-throughput protocols now provide a powerful and efficient means to evaluate embryos and fetuses subjected to genetic manipulations or chemical exposures. This review provides an overview of the technology, including scanning, reconstruction, visualization, segmentation, and analysis of micro-CT generated images. This is followed by a review of more recent applications of the technology in some common laboratory species that highlight the diverse issues that can be addressed. Copyright © 2013 Wiley Periodicals, Inc.
System Control Applications of Low-Power Radio Frequency Devices
NASA Astrophysics Data System (ADS)
van Rensburg, Roger
2017-09-01
This paper conceptualizes a low-power wireless sensor network design for application employment to reduce theft of portable computer devices used in educational institutions today. The aim of this study is to design and develop a reliable and robust wireless network that can disable access to a device's human interface. An embedded system supplied by an energy harvesting source, installed on the portable computer device, may represent one of multiple slave nodes which request regular updates from a standalone master station. A portable computer device which is operated in an undesignated area or in a field perimeter where master-to-slave communication is restricted, indicating a possible theft scenario, will initiate a shutdown of its operating system and render the device unusable. Consequently, an algorithm in the device firmware may ensure the necessary steps are executed to track the device, irrespective of whether the device is enabled. Design outcomes thus far indicate that a wireless network using low-power embedded hardware is feasible for anti-theft applications. By incorporating one of the latest Bluetooth low-energy, ANT+, ZigBee, or Thread wireless technologies, an anti-theft system may be implemented that has the potential to reduce major portable computer device theft in institutions of digitized learning.
A SINDA thermal model using CAD/CAE technologies
NASA Technical Reports Server (NTRS)
Rodriguez, Jose A.; Spencer, Steve
1992-01-01
The approach to thermal analysis described by this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of a pre and post processor and the ability to do interdisciplinary analysis, will be described.
"EcoRadiology"--pulling the plug on wasted energy in the radiology department.
McCarthy, Colin J; Gerstenmaier, Jan F; O'Neill, Ailbhe C; McEvoy, Sinead H; Hegarty, Chris; Heffernan, Eric J
2014-12-01
We sought to evaluate the power consumption of various devices around the radiology department, audit our use of recycling, and review efforts by vendors to reduce the environmental impact of their products. Using a readily available power monitor, we calculated the power consumption of different devices around our department. In particular, we calculated the financial and environmental cost of leaving equipment on overnight and/or at weekends. When it was not possible to measure energy usage directly, we obtained and reviewed relevant technical manuals. We contacted vendors directly to document how the environmental impact of new technology and decommissioning aging technology is being tackled. We found that 29 of 43 desktop computers and 25 of 27 picture archiving and communications system (PACS) reporting stations were left on needlessly overnight and/or at weekends, resulting in estimated electrical running costs while not in use of approximately $7253 per year, and CO2 emissions equivalent to the annual emissions of over 10 passenger cars. We discovered that none of our PACS reporting stations supported energy-saving modes such as "sleep" or "hibernate." Despite encouraging staff to turn off computers when not in use, a reaudit found no improvement in results. Simple steps such as turning off computers and air-conditioning units can produce very significant financial and environmental savings. Radiology can lead the way in making hospitals more energy efficient. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
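The overnight-idle arithmetic in this audit is easy to reproduce. The sketch below applies an assumed tariff and assumed idle wattages (the article's measured values are not reproduced here) to the reported counts of 29 desktops and 25 PACS stations left on outside working hours.

```python
# The tariff and both wattages are assumed for illustration only.
TARIFF = 0.15              # $ per kWh (assumed)
IDLE_HOURS_PER_WEEK = 128  # 168 hours minus a 40-hour working week

def annual_idle_cost(units, idle_watts):
    """Yearly cost of `units` machines drawing `idle_watts` while unused."""
    kwh_per_year = units * idle_watts * IDLE_HOURS_PER_WEEK * 52 / 1000
    return kwh_per_year * TARIFF

total = annual_idle_cost(29, 80) + annual_idle_cost(25, 250)
print(f"Estimated idle cost: ${total:,.0f} per year")
```

With these assumed figures the estimate lands in the same thousands-of-dollars range as the article's $7253, which is the point: a handful of wattages and a tariff are all the audit needs.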
Technology and the future of medical equipment maintenance.
Wear, J O
1999-05-01
Maintenance of medical equipment has been changing rapidly in the past few years. It is changing more rapidly in developed countries, but changes are also occurring in developing countries. Some of the changes may permit improved maintenance on the higher technology equipment in developing countries, since they do not require on-site expertise. Technology has had an increasing impact on the development of medical equipment with the increased use of microprocessors and computers. With miniaturization from space technology and electronic chip design, powerful microprocessors and computers have been built into medical equipment. The improvement in manufacturing technology has increased the quality of parts and therefore the medical equipment. This has resulted in increased mean time between failures and reduced maintenance needs. This has made equipment more reliable in remote areas and developing countries. The built-in computers and advances in software design have brought about self-diagnostics in medical equipment. The technicians now have a strong tool to be used in maintenance. One problem in this area is getting access to the self-diagnostics. Some manufacturers will not readily provide this access to the owner of the equipment. Advances in telecommunications in conjunction with self-diagnostics make available remote diagnosis and repair. Since components can no longer be repaired, a remote repair technician can instruct an operator or an on-site repairman on board replacement. In case of software problems, the remote repair technician may perform the repairs over the telephone. It is possible for the equipment to be monitored remotely by modem without interfering with the operation of the equipment. These changes in technology require the training of biomedical engineering technicians (BMETs) to change. They must have training in computers and telecommunications. Some of this training can be done with telecommunications and computers.
Bibliographic Good vs. Evil in Buffy the Vampire Slayer.
ERIC Educational Resources Information Center
DeCandido, GraceAnne A.
1999-01-01
Describes the pivotal role of the school librarian in the television series, "Buffy the Vampire Slayer." Discusses the image of librarians; the power of knowledge; information-seeking behavior; the methodical nature of research; the importance of printed materials; issues with computers and online technology; and censorship and…
Variable gravity research facility
NASA Technical Reports Server (NTRS)
Allan, Sean; Ancheta, Stan; Beine, Donna; Cink, Brian; Eagon, Mark; Eckstein, Brett; Luhman, Dan; Mccowan, Daniel; Nations, James; Nordtvedt, Todd
1988-01-01
Spin and despin requirements; sequence of activities required to assemble the Variable Gravity Research Facility (VGRF); power systems technology; life support; thermal control systems; emergencies; communication systems; space station applications; experimental activities; computer modeling and simulation of tether vibration; cost analysis; configuration of the crew compartments; and tether lengths and rotation speeds are discussed.
Code of Federal Regulations, 2011 CFR
2011-01-01
... integration of systems, technologies, programs, equipment, supporting processes, and implementing procedures...-in-depth methodologies to minimize the potential for an insider to adversely affect, either directly... protection of digital computer and communication systems and networks. (ii) Site-specific conditions that...
Power, Voice and Democratization: Feminist Pedagogy and Assessment in CMC.
ERIC Educational Resources Information Center
Campbell, Katy
2002-01-01
Discussion of differences in the ways that female and male faculty approach the use of technology in teaching focuses on a study of female faculty learning design preferences in critical feminist teaching. Considers learner-centered approaches; interactions with students in computer-mediated communication (CMC); democratization in online…
Things the Teacher of Your Media Utilization Course May Not Have Told You.
ERIC Educational Resources Information Center
Ekhaml, Leticia
1995-01-01
Discusses maintenance and safety information that may not be covered in a technology training program. Topics include computers, printers, televisions, video and audio equipment, electric roll laminators, overhead and slide projectors, equipment carts, power cords and outlets, batteries, darkrooms, barcode readers, Liquid Crystal Display units,…
Status Report on Image Information Systems and Image Data Base Technology
1989-12-01
PowerHouse, StarGate, StarNet. Significant Recent Developments: Acceptance major teaching Universities (Australia), U.S.A.F. Major Corporations. Future...scenario, all computers must be VAX). STARBASE StarBase StarNet (Network server), StarBase StarGate (SQL gateway). SYBASE Sybase is an inherently
Application of automatic image analysis in wood science
Charles W. McMillin
1982-01-01
In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...
Photonic-Enabled RF Canceller with Tunable Time-Delay Taps
2016-12-05
ports indicated in Fig. 1. The analyzer was configured to sweep 10 MHz to 6 GHz with +10 dBm of output power, and compute the time-domain transmission ...Laboratory Lexington, Massachusetts, USA Abstract—Future 5G wireless networks can benefit from the use of in-band full-duplex technologies that allow access...microwave photonics, RF cancellation. I. INTRODUCTION In-Band Full-Duplex (IBFD) technologies are being considered for 5th generation (5G) wireless
Automated array assembly task, phase 1
NASA Technical Reports Server (NTRS)
Carbajal, B. G.
1977-01-01
State-of-the-art technologies applicable to silicon solar cell and solar cell module fabrication were assessed. The assessment consisted of a technical feasibility evaluation and a cost projection for high volume production of solar cell modules. Design equations based on minimum power loss were used as a tool in the evaluation of metallization technologies. A solar cell process sensitivity study using models, computer calculations, and experimental data was used to identify process step variation and cell output variation correlations.
Designing Nanoscale Counter Using Reversible Gate Based on Quantum-Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Moharrami, Elham; Navimipour, Nima Jafari
2018-04-01
Some new technologies such as Quantum-dot Cellular Automata (QCA) is suggested to solve the physical limits of the Complementary Metal-Oxide Semiconductor (CMOS) technology. The QCA as one of the novel technologies at nanoscale has potential applications in future computers. This technology has some advantages such as minimal size, high speed, low latency, and low power consumption. As a result, it is used for creating all varieties of memory. Counter circuits as one of the important circuits in the digital systems are composed of some latches, which are connected to each other in series and actually they count input pulses in the circuit. On the other hand, the reversible computations are very important because of their ability in reducing energy in nanometer circuits. Improving the energy efficiency, increasing the speed of nanometer circuits, increasing the portability of system, making smaller components of the circuit in a nuclear size and reducing the power consumption are considered as the usage of reversible logic. Therefore, this paper aims to design a two-bit reversible counter that is optimized on the basis of QCA using an improved reversible gate. The proposed reversible structure of 2-bit counter can be increased to 3-bit, 4-bit and more. The advantages of the proposed design have been shown using QCADesigner in terms of the delay in comparison with previous circuits.
Computational Nanoelectronics and Nanotechnology at NASA ARC
NASA Technical Reports Server (NTRS)
Saini, Subhash; Kutler, Paul (Technical Monitor)
1998-01-01
Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate of a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high performance, low power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future generation micro- and nano-devices, the IT Modeling and Simulation Group has been started at NASA Ames with the goal of developing an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and the research objectives of the IT Modeling and Simulation Group, including the applications of nanoelectronic-based devices relevant to NASA missions.
CBESW: sequence alignment on the Playstation 3.
Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil
2008-09-17
The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. For large datasets, our implementation on the PlayStation 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. The results from our experiments demonstrate that the PlayStation 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications.
CBESW: Sequence Alignment on the Playstation 3
Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil
2008-01-01
Background The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation® 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. Results For large datasets, our implementation on the PlayStation® 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. Conclusion The results from our experiments demonstrate that the PlayStation® 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications. PMID:18798993
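For readers unfamiliar with the kernel being accelerated here, a plain scalar Smith-Waterman score computation looks like the sketch below. The Cell implementation described above parallelizes this recurrence across the SPEs; the scoring parameters in this sketch are illustrative, not those used in CBESW.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Scalar Smith-Waterman local alignment score (illustrative params).
    Fills the DP matrix H and returns its maximum cell value."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Each cell depends only on its left, upper, and upper-left neighbors, which is why the matrix can be swept in anti-diagonal stripes and vectorized, the structure both the Cell and SSE/CUDA implementations cited above exploit.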
Security and Cloud Outsourcing Framework for Economic Dispatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform the in-house infrastructure.
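The abstract does not spell out the transformation, but the flavor of a confidentiality-preserving linear program can be sketched with a simple affine masking: shift the variables by a random vector and mix the constraint rows with a random invertible matrix, so the cloud sees neither the original data nor its structure. Everything below, including the 2x2 example, is an illustrative assumption, not the paper's actual scheme.

```python
def matmul(M, A):
    """Plain-Python matrix product M @ A."""
    return [[sum(M[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(M))]

def matvec(A, x):
    """Plain-Python matrix-vector product A @ x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def mask_lp(A, b, c, M, r):
    """Mask `min c.x  s.t.  A x = b` via the substitution x = y - r and
    row mixing by an invertible M: the cloud solves the masked problem
    in y, and the data owner recovers x = y - r locally."""
    Ar = matvec(A, r)
    A_m = matmul(M, A)
    b_m = matvec(M, [b[i] + Ar[i] for i in range(len(b))])
    return A_m, b_m, c  # objective only shifts by the constant c.r
```

Any x feasible for the original problem maps to y = x + r feasible for the masked one, which is what makes the outsourced solve recoverable by the owner.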
Security and Cloud Outsourcing Framework for Economic Dispatch
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi; ...
2017-04-24
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform the in-house infrastructure.
NASA Astrophysics Data System (ADS)
Perry, Alexander R.
2002-06-01
Low Frequency Eddy Current (EC) probes are capable of measurement from 5 MHz down to DC through the use of Magnetoresistive (MR) sensors. Choosing components with appropriate electrical specifications allows them to be matched to the power and impedance characteristics of standard computer connectors. This permits direct attachment of the probe to inexpensive computers, thereby eliminating external power supplies, amplifiers and modulators that have heretofore precluded very low system purchase prices. Such price reduction is key to increased market penetration in General Aviation maintenance and consequent reduction in recurring costs. This paper examines our computer software CANDETECT, which implements this approach and permits effective probe operation. Results are presented to show the intrinsic sensitivity of the software and demonstrate its practical performance when seeking cracks in the underside of a thick aluminum multilayer structure. The majority of the General Aviation light aircraft fleet uses rivets and screws to attach sheet aluminum skin to the airframe, resulting in similar multilayer lap joints.
Physics and Entrepreneurship: A Small Business Perspective
NASA Astrophysics Data System (ADS)
Cleveland, Jason
2013-03-01
DARPA's Microsystems Technology Office, MTO, conceives and develops a wide range of technologies to benefit the US warfighter, from exotic GaN transistors to high-power fiber lasers, highly efficient embedded computer systems to synthetic biology. MTO has world class electrical and mechanical engineers, but we also have a cadre of extremely capable physicists, whose complementary skillset has been absolutely essential to identifying promising technological avenues for the office and for the agency. In this talk I will explain the DARPA model of technology development, using real examples from MTO, highlighting programs where physics-based insights have led to important new capabilities for the Dept of Defense.
The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update
NASA Astrophysics Data System (ADS)
Flatley, T.
2012-12-01
SpaceCube is an FPGA based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.
Computational Approaches to Phenotyping
Lussier, Yves A.; Liu, Yang
2007-01-01
The recent completion of the Human Genome Project has made possible a high-throughput “systems approach” for accelerating the elucidation of molecular underpinnings of human diseases, and subsequent derivation of molecular-based strategies to more effectively prevent, diagnose, and treat these diseases. Although altered phenotypes are among the most reliable manifestations of altered gene functions, research using systematic analysis of phenotype relationships to study human biology is still in its infancy. This article focuses on the emerging field of high-throughput phenotyping (HTP) phenomics research, which aims to capitalize on novel high-throughput computation and informatics technology developments to derive genomewide molecular networks of genotype–phenotype associations, or “phenomic associations.” The HTP phenomics research field faces the challenge of technological research and development to generate novel tools in computation and informatics that will allow researchers to amass, access, integrate, organize, and manage phenotypic databases across species and enable genomewide analysis to associate phenotypic information with genomic data at different scales of biology. Key state-of-the-art technological advancements critical for HTP phenomics research are covered in this review. In particular, we highlight the power of computational approaches to conduct large-scale phenomics studies. PMID:17202287
Carmichael, Clare; Carmichael, Patrick
2014-01-01
This paper highlights aspects related to current research and thinking about ethical issues in relation to Brain Computer Interface (BCI) and Brain-Neuronal Computer Interfaces (BNCI) research through the experience of one particular project, BrainAble, which is exploring and developing the potential of these technologies to enable people with complex disabilities to control computers. It describes how ethical practice has been developed both within the multidisciplinary research team and with participants. The paper presents findings in which participants shared their views of the project prototypes, of the potential of BCI/BNCI systems as an assistive technology, and of their other possible applications. This draws attention to the importance of ethical practice in projects where high expectations of technologies, and representations of "ideal types" of disabled users may reinforce stereotypes or drown out participant "voices". Ethical frameworks for research and development in emergent areas such as BCI/BNCI systems should be based on broad notions of a "duty of care" while being sufficiently flexible that researchers can adapt project procedures according to participant needs. They need to be frequently revisited, not only in the light of experience, but also to ensure they reflect new research findings and ever more complex and powerful technologies.
Experimental Validation of a Closed Brayton Cycle System Transient Simulation
NASA Technical Reports Server (NTRS)
Johnson, Paul K.; Hervol, David S.
2006-01-01
The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, Ohio was used to validate the results of a computational code known as Closed Cycle System Simulation (CCSS). Conversion system thermal transient behavior was the focus of this validation. The BPCU was operated at various steady state points and then subjected to transient changes involving shaft rotational speed and thermal energy input. These conditions were then duplicated in CCSS. Validation of the CCSS BPCU model provides confidence in developing future Brayton power system performance predictions, and helps to guide high power Brayton technology development.
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
Small solar thermal power systems (up to 10 MWe in size) were tested. The solar thermal power plant ranking study was performed to aid in experiment activity and support decisions for the selection of the most appropriate technological approach. The cost and performance were determined for insolation conditions by utilizing the Solar Energy Simulation computer code (SESII). This model optimizes the size of the collector field and energy storage subsystem for given engine generator and energy transport characteristics. The development of the simulation tool, its operation, and the results achieved from the analysis are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang
The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, as the myriad of smart sensors and meters in the power grid increases by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.
Use of Soft Computing Technologies For Rocket Engine Control
NASA Technical Reports Server (NTRS)
Trevino, Luis C.; Olcmen, Semih; Polites, Michael
2003-01-01
The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to further improve overall engine system reliability and performance. Specifically, this will be presented by enhancing rocket engine control and engine health management (EHM) using SCT coupled with conventional control technologies, and sound software engineering practices used in Marshall's Flight Software Group. The principal goals are to improve software management, software development time and maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control and EHM methodologies, but to provide alternative design choices for control, EHM, implementation, performance, and sustaining engineering. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion, software engineering for embedded systems, and soft computing technologies (i.e., neural networks, fuzzy logic, and Bayesian belief networks), much of which is presented in this paper. The first targeted demonstration rocket engine platform is the MC-1 (formerly FASTRAC Engine) which is simulated with hardware and software in the Marshall Avionics & Software Testbed laboratory that
End-to-end plasma bubble PIC simulations on GPUs
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava
2017-10-01
Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).
Computational modeling of cardiac hemodynamics: Current status and future outlook
NASA Astrophysics Data System (ADS)
Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.
2016-01-01
The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms are enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics emerges as a powerful non-invasive modality that can fill this information gap and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling, and simulation that are expected to play a key role in its future development. Some recent advances in modeling and simulation of cardiac flow are described using examples from our own work as well as the research of other groups.
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease of neighbourhood movement, good convergence rate, and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as a bin-packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency, and memory usage efficiency.
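As a point of reference for the comparison above, the bin-packing baseline such papers test against can be sketched as first-fit-decreasing placement. The single-dimension capacity model and the names below are illustrative assumptions, not the paper's formulation, which optimises power and wastage jointly.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Bin-packing baseline (sketch): place each VM, largest demand
    first, on the first host with enough remaining capacity; open a
    new host when none fits. Returns (vm -> host index, host count)."""
    hosts = []       # remaining capacity of each opened host
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)
```

Fewer opened hosts means fewer powered-on machines, which is why bin packing serves as a natural (if one-dimensional) proxy for the power-aware placement objective.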
Savant Genome Browser 2: visualization and analysis for population-scale genomics.
Fiume, Marc; Smith, Eric J M; Brook, Andrew; Strbenac, Dario; Turner, Brian; Mezlini, Aziz M; Robinson, Mark D; Wodak, Shoshana J; Brudno, Michael
2012-07-01
High-throughput sequencing (HTS) technologies are providing an unprecedented capacity for data generation, and there is a corresponding need for efficient data exploration and analysis capabilities. Although most existing tools for HTS data analysis are developed for either automated (e.g. genotyping) or visualization (e.g. genome browsing) purposes, such tools are most powerful when combined. For example, integration of visualization and computation allows users to iteratively refine their analyses by updating computational parameters within the visual framework in real-time. Here we introduce the second version of the Savant Genome Browser, a standalone program for visual and computational analysis of HTS data. Savant substantially improves upon its predecessor and existing tools by introducing innovative visualization modes and navigation interfaces for several genomic datatypes, and synergizing visual and automated analyses in a way that is powerful yet easy even for non-expert users. We also present a number of plugins that were developed by the Savant Community, which demonstrate the power of integrating visual and automated analyses using Savant. The Savant Genome Browser is freely available (open source) at www.savantbrowser.com.
Utilization of Virtual Server Technology in Mission Operations
NASA Technical Reports Server (NTRS)
Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.
2010-01-01
Virtualization provides the opportunity to continue to do "more with less": more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the advantages and disadvantages of virtualization in all of the environments associated with the software and systems development-to-operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
Creating a Rackspace and NASA Nebula compatible cloud using the OpenStack project (Invited)
NASA Astrophysics Data System (ADS)
Clark, R.
2010-12-01
NASA and Rackspace have both contributed technology to the OpenStack project, which allows anyone to create a private Infrastructure as a Service (IaaS) cloud using open source software and commodity hardware. OpenStack is designed and developed completely in the open and with an open governance process. NASA donated Nova, which powers the compute portion of the NASA Nebula Cloud Computing Platform, and Rackspace donated Swift, which powers Rackspace Cloud Files. The project is now in continuous development by NASA, Rackspace, and hundreds of other participants. When you create a private cloud using OpenStack, you will have the ability to easily interact with your private cloud, a government cloud, and an ecosystem of public cloud providers, using the same API.
Integrated solar energy system optimization
NASA Astrophysics Data System (ADS)
Young, S. K.
1982-11-01
The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.
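SYSOPT's hourly energy/mass flow balances are not detailed in the abstract; as a rough illustration of what an hourly balance at an interconnection point involves, here is a minimal sketch under assumed conventions (a single battery bus, unit-consistent hourly series, no conversion losses), not the program's actual method:

```python
def hourly_balance(load, solar, wind, battery_cap):
    """Toy hourly energy balance: renewables serve the load first, any
    surplus charges the battery, deficits discharge it, and whatever the
    battery cannot cover is counted as unmet load."""
    soc, unmet = 0.0, 0.0  # battery state of charge, accumulated shortfall
    for demand, s, w in zip(load, solar, wind):
        net = s + w - demand
        if net >= 0:
            soc = min(battery_cap, soc + net)   # charge with surplus
        else:
            draw = min(soc, -net)               # discharge to cover deficit
            soc -= draw
            unmet += (-net) - draw              # remainder is unserved
    return soc, unmet
```

Sweeping subsystem sizes and picking the cheapest combination with zero (or acceptably small) unmet load is the essence of the sizing optimization such a tool performs.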
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
Elucidating reaction mechanisms on quantum computers.
Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias
2017-07-18
With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.
Design Tools for Reconfigurable Hardware in Orbit (RHinO)
NASA Technical Reports Server (NTRS)
French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian
2004-01-01
The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable-gate-array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space-effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions to the tool suite will also allow evolvable-algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of reconfigurable hardware in orbit, via an integrated design tool suite aiming to reduce the risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.
An Internet of Things Approach to Electrical Power Monitoring and Outage Reporting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Daniel B
The so-called Internet of Things concept has captured much attention recently as ordinary devices are connected to the Internet for monitoring and control purposes. One enabling technology is the proliferation of low-cost, single board computers with built-in network interfaces. Some of these are capable of hosting full-fledged operating systems that provide rich programming environments. Taken together, these features enable inexpensive solutions for even traditional tasks such as the one presented here for electrical power monitoring and outage reporting.
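The report's implementation is not reproduced here, but the core outage-detection logic such a monitor needs is simple enough to sketch. In this hypothetical fragment the samples would come from a voltage-sense input on the single-board computer; the function name and threshold are assumptions for illustration:

```python
def detect_outages(samples, threshold=0.5):
    """Scan a sequence of line-voltage sense readings and return
    (start_index, end_index) pairs for each interval below threshold."""
    outages, start = [], None
    for i, v in enumerate(samples):
        if v < threshold and start is None:
            start = i                      # power just dropped
        elif v >= threshold and start is not None:
            outages.append((start, i))     # power restored
            start = None
    if start is not None:                  # still out at the end of the trace
        outages.append((start, len(samples)))
    return outages
```

Each detected interval could then be queued and posted to a reporting endpoint once network connectivity returns, which is the Internet-of-Things half of the design.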
NASA Astrophysics Data System (ADS)
Gould, C. A.; Shammas, N. Y. A.; Grainger, S.; Taylor, I.; Simpson, K.
2012-06-01
This paper documents the 3D modeling and simulation of a three couple thermoelectric module using the Synopsys Technology Computer Aided Design (TCAD) semiconductor simulation software. Simulation results are presented for thermoelectric power generation, cooling and heating, and successfully demonstrate the basic thermoelectric principles. The 3D TCAD simulation model of a three couple thermoelectric module can be used in the future to evaluate different thermoelectric materials, device structures, and improve the efficiency and performance of thermoelectric modules.
NASA Advanced Supercomputing Facility Expansion
NASA Technical Reports Server (NTRS)
Thigpen, William W.
2017-01-01
The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.
Yang, Ting; Dong, Jianji; Lu, Liangjun; Zhou, Linjie; Zheng, Aoling; Zhang, Xinliang; Chen, Jianping
2014-07-04
Photonic integrated circuits for photonic computing open up the possibility for the realization of ultrahigh-speed and ultra wide-band signal processing with compact size and low power consumption. Differential equations model and govern fundamental physical phenomena and engineering systems in virtually any field of science and engineering, such as temperature diffusion processes, physical problems of motion subject to acceleration inputs and frictional forces, and the response of different resistor-capacitor circuits, etc. In this study, we experimentally demonstrate a feasible integrated scheme to solve first-order linear ordinary differential equation with constant-coefficient tunable based on a single silicon microring resonator. Besides, we analyze the impact of the chirp and pulse-width of input signals on the computing deviation. This device can be compatible with the electronic technology (typically complementary metal-oxide semiconductor technology), which may motivate the development of integrated photonic circuits for optical computing.
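The device solves a constant-coefficient first-order linear ODE, y'(t) + k·y(t) = x(t), optically; the expected behavior can be checked numerically. Below is a minimal forward-Euler reference for that equation (the step size and test input are arbitrary choices made here, not the paper's parameters):

```python
def solve_first_order(x, k, dt):
    """Forward-Euler integration of y'(t) = -k*y(t) + x(t), with y(0) = 0.
    k plays the role of the tunable constant coefficient."""
    y, trace = 0.0, []
    for xi in x:
        trace.append(y)
        y += dt * (-k * y + xi)
    return trace
```

For a unit-step input the solution settles to 1/k, the standard first-order response, which is the kind of check the authors' computing-deviation analysis performs against the photonic output.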
Design of a fault-tolerant reversible control unit in molecular quantum-dot cellular automata
NASA Astrophysics Data System (ADS)
Bahadori, Golnaz; Houshmand, Monireh; Zomorodi-Moghadam, Mariam
Quantum-dot cellular automata (QCA) is a promising emerging nanotechnology that has been attracting considerable attention due to its small feature size, ultra-low power consumption, and high clock frequency. Therefore, there have been many efforts to design computational units based on this technology. Despite these advantages of QCA-based nanotechnologies, their implementation is susceptible to a high error rate. On the other hand, using reversible computing leads to zero bit erasures and no energy dissipation. Because reversible computation does not lose information, faults can be detected with high probability. In this paper, we first propose a fault-tolerant control unit using reversible gates which improves on the previous design. The proposed design is then synthesized to the QCA technology and simulated with the QCADesigner tool. Evaluation results demonstrate the performance of the proposed approach.
Parallel algorithm for computation of second-order sequential best rotations
NASA Astrophysics Data System (ADS)
Redif, Soydan; Kasap, Server
2013-12-01
Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.
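SBR2 itself operates on para-Hermitian polynomial matrices, but its elementary step generalizes the classical Jacobi rotation that zeroes an off-diagonal entry of an ordinary symmetric matrix. The scalar analogue can be sketched as follows; this is the textbook cyclic Jacobi sweep, shown only for intuition, and is not the authors' parallel FPGA design:

```python
import math

def jacobi_sweep(A):
    """One cyclic Jacobi sweep over a real symmetric matrix A (list of lists),
    applying plane rotations A <- R A R^T that zero each off-diagonal pair."""
    n = len(A)
    for p in range(n):
        for q in range(p + 1, n):
            if abs(A[p][q]) < 1e-12:
                continue
            # Angle that annihilates A[p][q]: tan(2*theta) = 2*A[p][q] / (A[q][q] - A[p][p]).
            theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
            c, s = math.cos(theta), math.sin(theta)
            for k in range(n):  # rotate rows p and q
                apk, aqk = A[p][k], A[q][k]
                A[p][k] = c * apk - s * aqk
                A[q][k] = s * apk + c * aqk
            for k in range(n):  # rotate columns p and q
                akp, akq = A[k][p], A[k][q]
                A[k][p] = c * akp - s * akq
                A[k][q] = s * akp + c * akq
    return A
```

Because distinct (p, q) rotations touch disjoint row/column pairs, sweeps of this kind parallelize naturally, which is the property the proposed FPGA scheme exploits for the polynomial-matrix case.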
Technology in the teaching of neuroscience: enhanced student learning.
Griffin, John D
2003-12-01
The primary motivation for integrating any form of education technology into a particular course or curriculum should always be to enhance student learning. However, it can be difficult to determine which technologies will be the most appropriate and effective teaching tools. Through the alignment of technology-enhanced learning experiences with a clear set of learning objectives, teaching becomes more efficient and effective and learning is truly enhanced. In this article, I describe how I have made extensive use of technology in two neuroscience courses that differ in structure and content. Course websites function as resource centers and provide a forum for student interaction. PowerPoint presentations enhance formal lectures and provide an organized outline of presented material. Some lectures are also supplemented with interactive CD-ROMs, used in the presentation of difficult physiological concepts. In addition, a computer-based physiological recording system is used in laboratory sessions, improving the hands-on experience of group learning while reinforcing the concepts of the research method. Although technology can provide powerful teaching tools, the enhancement of the learning environment is still dependent on the instructor. It is the skill and enthusiasm of the instructor that determines whether technology will be used effectively.
NASA Technical Reports Server (NTRS)
Moses, Robert W.; Kuhl, Christopher A.; Templeton, Justin D.
2005-01-01
NASA's exploration goals for Mars and Beyond will require new power systems and in situ resource utilization (ISRU) technologies. Regenerative aerobraking may offer a revolutionary approach for in situ power generation and oxygen harvesting during these exploration missions. In theory, power and oxygen can be collected during aerobraking and stored for later use in orbit or on the planet. This technology would capture energy and oxygen from the plasma field that occurs naturally during hypersonic entry using well understood principles of magnetohydrodynamics and oxygen filtration. This innovative approach generates resources upon arrival at the operational site, and thus greatly differs from the traditional approach of taking everything you need with you from Earth. Fundamental analysis, computational fluid dynamics, and some testing of experimental hardware have established the basic feasibility of generating power during a Mars entry. Oxygen filtration at conditions consistent with spacecraft entry parameters at Mars has been studied to a lesser extent. Other uses of the MHD power are presented. This paper illustrates how some features of regenerative aerobraking may be applied to support human and robotic missions at Mars.
Research on Collaborative Technology in Distributed Virtual Reality System
NASA Astrophysics Data System (ADS)
Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi
2018-01-01
Distributed virtual reality technology applied to joint training simulation needs CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, in order to achieve collaborative display and collaborative computing. In this paper, CSCW terminal multicast technology is used to modify and extend the implementation framework of HLA. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to establish the corresponding Mesh tree for each federate. During simulation execution, a new thread for CSCW real-time multicast interaction is added to the RTI, so that the RTI can also use the window message mechanism to notify the application to update the display. Many applications to immersive simulation training in substations under large power grid operation show that the collaborative technology achieves a satisfactory training effect in distributed virtual reality simulation.
Niamtu, Joseph
2004-01-01
Cosmetic surgery and photography are inseparable. Clinical photographs serve as diagnostic aids, medical records, legal protection, and marketing tools. In the past, taking high-quality, standardized images and maintaining and using them for presentations were tasks of significant proportion when done correctly. Although the cosmetic literature is replete with articles on standardized photography, this has eluded many practitioners, in part due to its complexity. A paradigm shift has occurred in the past decade, and digital technology has revolutionized clinical photography and presentations. Digital technology has made it easier than ever to take high-quality, standardized images and to use them in a multitude of ways to enhance the practice of cosmetic surgery. PowerPoint presentations have become the standard for academic presentations, but many pitfalls exist, especially when taking a backup disc to play on an alternate computer at a lecture venue. Embracing digital technology has a mild to moderate learning curve but is complicated by old habits and holdovers from the days of slide photography, macro lenses, and specialized flashes. Discussion is presented to circumvent common problems involving computer glitches with PowerPoint presentations. In the past, high-quality clinical photography was complex and sometimes beyond the confines of a busy clinical practice. The digital revolution of the past decade has removed many of these associated barriers, and it has never been easier or more affordable to take images and use them in a multitude of ways for learning, judging surgical outcomes, teaching and lecturing, and marketing. Even though this technology has existed for years, many practitioners have failed to embrace it for various reasons or fears. By following a few simple techniques, even the most novice practitioner can be on the forefront of digital imaging technology.
By observing a number of modified techniques with digital cameras, any practitioner can take high-quality, standardized clinical photographs and can make and use these images to enhance his or her practice. This article deals with common pitfalls of digital photography and PowerPoint presentations and presents multiple pearls to achieve proficiency quickly with digital photography and imaging as well as avoid malfunction of PowerPoint presentations in an academic lecture venue.
Efficiency improvements in US Office equipment: Expected policy impacts and uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koomey, J.G.; Cramer, M.; Piette, M.A.
This report describes a detailed end-use forecast of office equipment energy use for the US commercial sector. We explore the likely impacts of the US Environmental Protection Agency's ENERGY STAR office equipment program and the potential impacts of advanced technologies. The ENERGY STAR program encourages manufacturers to voluntarily incorporate power-saving features into personal computers, monitors, printers, copiers, and fax machines in exchange for allowing manufacturers to use the EPA ENERGY STAR logo in their advertising campaigns. The advanced technology case assumes that the most energy-efficient current technologies are implemented regardless of cost.
Design, Specification, and Synthesis of Aircraft Electric Power Systems Control Logic
NASA Astrophysics Data System (ADS)
Xu, Huan
Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to the integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis focuses on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area. This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller. The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation.
Finally, the problem of partial observability and dynamic state estimation is explored. Given a set placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.
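The thesis synthesizes its controllers from LTL specifications; the flavor of the resulting reactive logic can be conveyed with a hand-written toy rule for a hypothetical two-generator, two-bus topology with a cross-tie contactor. This is an illustration of the kind of requirement such controllers enforce ("every bus stays powered whenever some source is healthy"), not the synthesized controller from the thesis:

```python
def bus_controller(gen_left_ok, gen_right_ok):
    """Toy reactive rule: each bus is fed by its own generator when healthy;
    the cross-tie closes only when exactly one source remains, so the
    surviving generator feeds both buses and healthy sources are never paralleled."""
    left_feed = gen_left_ok
    right_feed = gen_right_ok
    cross_tie = gen_left_ok != gen_right_ok
    return left_feed, right_feed, cross_tie
```

A synthesis tool would derive logic like this automatically from LTL requirements (e.g. "always, a healthy generator implies its bus is powered") and prove it correct by construction over all environment behaviors, rather than relying on a hand-checked truth table.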
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, W.; Green, J.
2001-01-01
The purpose of this research was to determine the optimal configuration of home power systems relevant to different regions in the United States. The hypothesis was that, regardless of region, the optimal system would be a hybrid incorporating wind technology, versus a photovoltaic hybrid system without the use of wind technology. The method used in this research was HOMER, the Hybrid Optimization Model for Electric Renewables. HOMER is a computer program that optimizes electrical configurations under user-defined circumstances. According to HOMER, the optimal system for the four regions studied (Kansas, Massachusetts, Oregon, and Arizona) was a hybrid incorporating wind technology. The cost differences between these regions, however, were dependent upon regional renewable resources. Future studies will be necessary, as it is difficult to estimate meteorological impacts for other regions.
A high voltage dielectrically isolated smart power technology based on silicon direct bonding
NASA Astrophysics Data System (ADS)
Macary, Veronique
1992-09-01
The feasibility of a dielectrically isolated technology based on the silicon direct bonding technique, for high voltage smart power applications in the 1000 to 1550 V / 1 to 20 A range, where a vertical power switch is necessary, is investigated and demonstrated. Static and dynamic isolation of the low voltage circuitry integrated beside the vertical power transistor is the main concern for this family of circuits. Dielectric isolation offers better protection to the low voltage part than junction isolation, because it eliminates the parasitic bipolar transistor inherent to the latter isolation technique. Silicon direct bonding provides a cost-effective way to obtain a buried oxide isolation layer. In addition, the application requires a Si/Si bonded area in the active region of the vertical power switch. A strong influence of the pre-bonding cleaning on the electrical characteristics of the Si/Si interface is pointed out, and the presence of crystalline defects is assumed to be at the origin of electrical failures. The main problems of silicon direct bonding process compatibility with standard processes were overcome, and a complete process flow, including the simultaneous integration of a vertical power bipolar transistor together with bipolar control circuitry, was validated. Using a peripheral biased ring is shown to provide an easy way to optimize the high voltage termination for the smart power circuit, without adding an additional technological step. This technique was studied by two-dimensional electrical simulation (BIDIM2 software) as well as by analytical computation.
The environment power system analysis tool development program
NASA Technical Reports Server (NTRS)
Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Stevens, N. John; Putnam, Rand M.; Roche, James C.; Wilcox, Katherine G.
1990-01-01
The Environment Power System Analysis Tool (EPSAT) is being developed to provide space power system design engineers with an analysis tool for determining the performance of power systems in both naturally occurring and self-induced environments. The program is producing an easy-to-use computer-aided engineering (CAE) tool general enough to provide a vehicle for technology transfer from space scientists and engineers to power system design engineers. The results of the project after two years of a three-year development program are given. The EPSAT approach separates the CAE tool into three distinct functional units: a modern user interface to present information; a data dictionary interpreter to coordinate analysis; and a database for storing system designs and the results of analysis.
Banks, Jim
2015-01-01
The brain contains all that makes us human, but its complexity is the source of both inspiration and frailty. An aging population is increasingly in need of effective care and therapies for brain diseases, including stroke, Parkinson's disease, and Alzheimer's disease. The world's scientific community is working hard to unravel the secrets of the brain's computing power and to devise technologies that can heal it when it fails and restore critical functions to patients with neurological conditions. Neurotechnology is the emerging field that brings together the development of technologies to study the brain and devices that improve and repair brain function. What is certain is that the momentum behind neurotechnological research is building, and whether through implants, brain-computer interfaces (BCIs), or innovative computational systems inspired by the human brain, more light will be shed on our most complex and most precious organ, which will no doubt lead to effective treatments for many neurological conditions.
NASA Astrophysics Data System (ADS)
Sanford, James L.; Schlig, Eugene S.; Prache, Olivier; Dove, Derek B.; Ali, Tariq A.; Howard, Webster E.
2002-02-01
The IBM Research Division and eMagin Corp. have jointly developed a low-power VGA direct-view active matrix OLED display, fabricated on a crystalline silicon CMOS chip. The display is incorporated in IBM prototype wristwatch computers running the Linux operating system. IBM designed the silicon chip, and eMagin developed the organic stack and performed the back-end-of-line processing and packaging. Each pixel is driven by a constant current source controlled by a CMOS RAM cell, and the display receives its data from the processor memory bus. This paper describes the OLED technology and packaging, and outlines the design of the pixel and display electronics and the processor interface. Experimental results are presented.
Development of dialog system powered by textual educational content
NASA Astrophysics Data System (ADS)
Bisikalo, Oleg V.; Dovgalets, Sergei M.; Pijarski, Paweł; Lisovenko, Anna I.
2016-09-01
Advances in computer technology require better interconnection between humans and computers and, more specifically, between complex information systems. The paper is therefore dedicated to the creation of dialog systems able to respond to users on the basis of implemented textual educational content. To support the dialog, a knowledge base model built on sense units and a fuzzy sense relation is suggested. Lexical meanings are extracted from the text by processing the syntactic links between the allologs of all the sentences, and the answer is generated as a composition of fuzzy relations according to a formal criterion. The information support technology was put to an evaluation sample test, which demonstrates the combination of information from several sentences in the final response.
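The answer generation by "composition of fuzzy relations" has a standard form, the max-min composition: for fuzzy relations R on X×Y and S on Y×Z, (R∘S)(x, z) = max over y of min(R(x, y), S(y, z)). The sketch below illustrates that generic operation in NumPy; the membership matrices are made-up values, not data or notation from the paper itself.

```python
import numpy as np

def max_min_compose(R, S):
    """Max-min composition of fuzzy relations R (m x n) and S (n x p)."""
    m, n = R.shape
    n2, p = S.shape
    assert n == n2, "inner dimensions must match"
    out = np.empty((m, p))
    for i in range(m):
        for k in range(p):
            # (R o S)(i, k) = max over j of min(R[i, j], S[j, k])
            out[i, k] = np.max(np.minimum(R[i, :], S[:, k]))
    return out

# Illustrative membership grades only.
R = np.array([[0.8, 0.3],
              [0.1, 0.9]])
S = np.array([[0.6, 0.2],
              [0.5, 0.7]])
T = max_min_compose(R, S)  # [[0.6, 0.3], [0.5, 0.7]]
```

Max-min (rather than ordinary matrix multiplication) keeps every entry a valid membership grade in [0, 1].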
Development of natural gas rotary engines
NASA Astrophysics Data System (ADS)
Mack, J. R.
1991-08-01
Development of natural gas-fueled rotary engines was pursued on the parallel paths of converting Mazda automotive engines and of establishing technology for, and demonstrating a test model of, a larger John Deere Technologies Incorporated (JDTI) rotary engine with a power capability of 250 HP per power section, for future production of multi-rotor engines with power ratings of 250, 500, and 1000 HP and upward. Mazda engines were converted to natural gas and were characterized in the laboratory, followed by nearly 12,000 hours of testing in three different field installations. To develop technology for the larger JDTI engine, laboratory and engine materials testing was accomplished. Extensive combustion analysis computer codes were modified, verified, and utilized to predict engine performance, to guide parameters for actual engine design, and to identify further improvements. A single-rotor test engine of 5.8 liter displacement was designed for natural gas operation based on the JDTI 580 engine series. This engine was built and tested. It ran well and essentially achieved predicted performance. Lean combustion and low NOx emissions were demonstrated.
Really Large Scale Computer Graphic Projection Using Lasers and Laser Substitutes
NASA Astrophysics Data System (ADS)
Rother, Paul
1989-07-01
This paper reflects on past laser projects that displayed vector-scanned computer graphic images onto very large and irregular surfaces. Since the availability of microprocessors and high-powered visible lasers, very large scale computer graphics projection has become a reality. Owing to their independence from a focusing lens, lasers easily project onto distant and irregular surfaces and have been used for amusement parks, theatrical performances, concert performances, industrial trade shows, and dance clubs. Lasers have been used to project onto mountains, buildings, 360° globes, clouds of smoke, and water. These methods have proven successful in installations at Epcot Theme Park in Florida; Stone Mountain Park in Georgia; the 1984 Olympics in Los Angeles; hundreds of corporate trade shows; and thousands of musical performances. With new ColorRay™ technology, costly and fragile lasers are no longer necessary: utilizing fiber optics, the functionality of lasers can be duplicated for new and exciting projection possibilities. ColorRay™ technology has enjoyed worldwide recognition in conjunction with Pink Floyd's and George Michael's world tours.
Bio and health informatics meets cloud : BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase of genomic data brought by the advent of next- and third-generation sequencing (NGS) technologies, together with the dramatic drop in sequencing cost, has turned biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in the transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which makes it a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling ever-increasing biological data. As data increase in size, many research organizations begin to experience a lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security, and extensibility of features. We then discuss issues and limitations of current cloud systems, and conclude by suggesting a biological cloud environment concept, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
[Intelligent watch system for health monitoring based on Bluetooth low energy technology].
Wang, Ji; Guo, Hailiang; Ren, Xiaoli
2017-08-01
In view of the development status of wearable technology and the demand for intelligent health monitoring, we studied a multi-function integrated smart watch solution and its key technologies. First, highly integrated sensor technology, Bluetooth low energy (BLE), and mobile communication technology were brought together in development practice. Second, for the hardware design of the system, we chose a scheme with highly integrated, cost-effective computer modules and chips. Third, we used the real-time operating system FreeRTOS to develop a friendly graphical interface for interaction through the touch screen. Finally, a high-performance application that connects wirelessly to the BLE hardware and synchronizes data was developed for the Android system. The functions of the system include a real-time calendar clock, telephone messages, address book management, step counting, heart rate and sleep quality monitoring, and so on. Experiments showed that the data collection accuracy of the various sensors, the system data transmission capacity, and the overall power consumption satisfy the production standard. Moreover, the system runs stably with low power consumption, realizing intelligent health monitoring effectively.
MorphoHawk: Geometric-based Software for Manufacturing and More
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keith Arterburn
2001-04-01
Hollywood movies portray facial recognition as a perfected technology, but the reality is that sophisticated computers and algorithmic calculations are far from perfect. In fact, the most sophisticated and successful computer for recognizing faces and other imagery is already the human brain, with more than 10 billion nerve cells. Beginning at birth, humans process data and connect optical and sensory experiences, creating an unparalleled accumulation of data that lets people associate images with life experiences, emotions, and knowledge. Computers are powerful, rapid, and tireless, but they still cannot compare to the highly sophisticated relational calculations and associations that the human computer can produce in connecting 'what we see with what we know.'
Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2012)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David C.; Syamlal, Madhava; Cottrell, Roger
2012-09-30
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is organized into 8 technical elements that fall under two focus areas. The first focus area (Physicochemical Models and Data) addresses the steps necessary to model and simulate the various technologies and processes needed to bring a new Carbon Capture and Storage (CCS) technology into production. The second focus area (Analysis & Software) is developing the software infrastructure to integrate the various components and implement the tools that are needed to make quantifiable decisions regarding the viability of new CCS technologies. CCSI also has an Industry Advisory Board (IAB).
By working closely with industry from the inception of the project to identify industrial challenge problems, CCSI ensures that the simulation tools are developed for the carbon capture technologies of most relevance to industry. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories' core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI's industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI's academic participants (Carnegie Mellon University, Princeton University, West Virginia University, and Boston University) bring unparalleled expertise in multiphase flow reactors, combustion, process synthesis and optimization, planning and scheduling, and process control techniques for energy processes. During Fiscal Year (FY) 12, CCSI released its first set of computational tools and models. This pre-release, a year ahead of the originally planned first release, is the result of intense industry interest in getting early access to the tools and the phenomenal progress of the CCSI technical team. These initial components of the CCSI Toolset provide new models and computational capabilities that will accelerate the commercial development of carbon capture technologies as well as related technologies, such as those found in the power, refining, chemicals, and gas production industries. 
The release consists of new tools for process synthesis and optimization to help identify promising concepts more quickly, new physics-based models of potential capture equipment and processes that will reduce the time to design and troubleshoot new systems, a framework to quantify the uncertainty of model predictions, and various enabling tools that provide new capabilities such as creating reduced order models (ROMs) from reacting multiphase flow simulations and running thousands of process simulations concurrently for optimization and UQ.
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited to this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these technologies to accelerate optical phase error characterization. With a single PC containing four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
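For readers unfamiliar with the underlying iteration, the classic (unmodified) Gerchberg-Saxton loop can be sketched in plain NumPy. This is a conceptual CPU sketch of the textbook algorithm only, not the MGS variant or the CUDA implementation described above; the pupil shape, image size, and iteration count are illustrative assumptions.

```python
import numpy as np

def gerchberg_saxton(measured_amp, pupil_support, n_iter=200, seed=0):
    """Classic Gerchberg-Saxton phase retrieval (plain NumPy sketch).

    Alternates between enforcing the measured focal-plane amplitude and
    the known pupil support, recovering the pupil phase from an
    intensity image (amplitude = sqrt(intensity)).
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, measured_amp.shape)
    field = pupil_support * np.exp(1j * phase)
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        # Keep the computed phase, enforce the measured amplitude.
        focal = measured_amp * np.exp(1j * np.angle(focal))
        field = np.fft.ifft2(focal)
        # Enforce unit amplitude inside the known pupil support.
        field = pupil_support * np.exp(1j * np.angle(field))
    return np.angle(field)

# Toy example: a flat-phase circular pupil whose focal-plane
# intensity we pretend to have "measured".
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (x**2 + y**2 < (n // 4)**2).astype(float)
target_amp = np.abs(np.fft.fft2(pupil))
est_phase = gerchberg_saxton(target_amp, pupil)
```

In a GPU implementation such as the one described above, the fft2/ifft2 pair inside this loop is what moves to the graphics card, since the FFTs dominate the cost.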
Computer usage and national energy consumption: Results from a field-metering study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery
The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflective of laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses.
Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power of power supplies to computing needs, and improving the efficiency of individual components.
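As a rough consistency check on the figures above (a sketch, not part of the study: the wattages below are implied by the reported numbers, not measured), the annual energy and daily usage together imply an average active-mode power draw:

```python
def implied_avg_power_w(aec_kwh_per_yr, hours_per_day):
    """Average power (W) implied by annual energy use and daily usage hours.

    Simplification: attributes all annual energy to active hours and
    ignores sleep/off-mode draw, so it slightly overstates On-mode power.
    """
    return aec_kwh_per_yr * 1000.0 / (hours_per_day * 365.0)

desktop_w = implied_avg_power_w(194, 7.3)  # ~73 W
laptop_w = implied_avg_power_w(75, 4.8)    # ~43 W
```

The roughly 73 W versus 43 W spread is consistent with the report's observation that desktops dominate unit-level consumption while laptops draw more than earlier estimates assumed.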
NASA/DOD Control/Structures Interaction Technology, 1986
NASA Technical Reports Server (NTRS)
Wright, Robert L. (Compiler)
1986-01-01
Control/structures interactions, deployment dynamics and system performance of large flexible spacecraft are discussed. Spacecraft active controls, deployable truss structures, deployable antennas, solar power systems for space stations, pointing control systems for space station gimballed payloads, computer-aided design for large space structures, and passive damping for flexible structures are among the topics covered.
Mobile Technologies for the Surgical Pathologist.
Hartman, Douglas J
2015-06-01
Recent advances in hardware and computing power contained within mobile devices have made it possible to use these devices to improve and enhance pathologist workflow. This article discusses the possible uses ranging from basic functions to intermediate functions to advanced functions. Barriers to implementation are also discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
High Tech/High Touch: A Synergy Applicable to Career Development.
ERIC Educational Resources Information Center
Pyle, K. Richard
1985-01-01
A method for using group counseling to enhance the learning and personal satisfaction of computer-assisted career guidance is discussed. The author states that this combination of the human and the technological element appears to have real power in assisting individuals to increase significantly their career maturity in a relatively short period…
ERIC Educational Resources Information Center
Simpson, Amber; Bannister, Nicole; Matthews, Gretchen
2017-01-01
There is a positive relationship between student participation in computer-supported collaborative learning (CSCL) environments and improved complex problem-solving strategies, increased learning gains, higher engagement in the thinking of their peers, and an enthusiastic disposition toward groupwork. However, student participation varies from…
The Power of Portals: Personalizing the Web To Build Community.
ERIC Educational Resources Information Center
Page, Dan
2001-01-01
Describes how the director of information systems for the computing and communications department and a team of software developers embarked on the task of creating and refining portal technology for a broad community of users with various relationships to the University of Washington. Discusses focus on individual needs; authentication, the…
ERIC Educational Resources Information Center
Levin, James A.; And Others
1987-01-01
The instructional media created by microcomputers interconnected by modems to form long-distance networks present some powerful new opportunities for education. While other uses of computers in education have been built on conventional instructional models of classroom interaction, instructional electronic networks facilitate a wider use of…
Energy requirement for the production of silicon solar arrays
NASA Technical Reports Server (NTRS)
Lindmayer, J.; Wihl, M.; Scheinine, A.; Morrison, A.
1977-01-01
An assessment of potential changes and alternative technologies which could impact the photovoltaic manufacturing process is presented. Topics discussed include: a multiple wire saw, ribbon growth techniques, silicon casting, and a computer model for a large-scale solar power plant. Emphasis is placed on reducing the energy demands of the manufacturing process.
Interactive Video in Training. Computers in Personnel--Making Management Profitable.
ERIC Educational Resources Information Center
Copeland, Peter
Interactive video is achieved by merging the two powerful technologies of microcomputing and video. Using television as the vehicle for display, text and diagrams, filmic images, and sound can be used separately or in combination to achieve a specific training task. An interactive program can check understanding, determine progress, and challenge…
Challenges for Educational Technologists in the 21st Century
ERIC Educational Resources Information Center
Mayes, Robin; Natividad, Gloria; Spector, J. Michael
2015-01-01
In 1972, Edsger Dijkstra claimed that computers had only introduced the new problem of learning to use them effectively. This is especially true in 2015 with regard to powerful new educational technologies. This article describes the challenges that 21st century educational technologists are, and will be, addressing as they undertake the effective…
A review of emerging non-volatile memory (NVM) technologies and applications
NASA Astrophysics Data System (ADS)
Chen, An
2016-11-01
This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.
Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, James H.; Cox, Philip; Harrington, William J
2013-09-03
ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically, this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements.
Fuel cartridge and fuel subsystems were improved to ensure effective fuel containment. PROJECT OVERVIEW The University of North Florida (UNF), with project partner the University of Florida, recently completed the Department of Energy (DOE) project entitled “Advanced Direct Methanol Fuel Cell for Mobile Computing”. The primary objective of the project was to advance portable fuel cell system technology towards the commercial targets laid out in the DOE R&D roadmap by developing a 20-watt, direct methanol fuel cell (DMFC) portable power supply based on UNF’s innovative “passive water recovery” MEA. Extensive component, sub-system, and system development and testing was undertaken to meet the rigorous demands of the consumer electronic application. Numerous brassboard (nonpackaged) systems were developed to optimize the integration process and to facilitate control algorithm development. The culmination of the development effort was a fully integrated DMFC power supply (referred to as DP4). The project goals were 40 W/kg for specific power, 55 W/l for power density, and 575 Whr/l for energy density. It should be noted that the specific power and power density were for the power section only, and did not include the hybrid battery. The energy density is based on three 200 ml fuel cartridges, and also did not include the hybrid battery. The results show that the DP4 system configured without the methanol concentration sensor exceeded all performance goals, achieving 41.5 W/kg for specific power, 55.3 W/l for power density, and 623 Whr/l for energy density. During the project, the DOE revised its technical targets, and the definition of many of these targets, for the portable power application. With this revision, specific power, power density, specific energy (Whr/kg), and energy density are based on the total system, including fuel tank, fuel, and hybridization battery. Fuel capacity is not defined, but the same value is required for all calculations.
Test data showed that the DP4 exceeded all 2011 Technical Status values; for example, the DP4 energy density was 373 Whr/l versus the DOE 2011 status of 200 Whr/l. For the DOE 2013 Technical Goals, the operation time was increased from 10 hours to 14.3 hours. Under these conditions, the DP4 closely approached or surpassed the technical targets; for example, the DP4 achieved 468 Whr/l versus the goal of 500 Whr/l. Thus, UNF has successfully met the project goals. A fully-operational, 20-watt DMFC power supply was developed based on the UNF passive water recovery MEA. The power supply meets the project performance goals and advances portable power technology towards the commercialization targets set by the DOE.
A learnable parallel processing architecture towards unity of memory and computing
NASA Astrophysics Data System (ADS)
Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.
2015-08-01
Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.
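The in-memory computing idea behind architectures like “iMemComp”, using the stored device states themselves to compute, can be illustrated with a resistive crossbar: inputs are applied as row voltages, device conductances store the "weights", and each column current is the Kirchhoff-law sum of V·G products, i.e., a matrix-vector product computed in place. This is a generic conceptual sketch with made-up conductance and voltage values, not the paper's specific logic/learning scheme.

```python
import numpy as np

# Resistive crossbar as in-place matrix-vector multiply:
# I_col[k] = sum over rows j of V[j] * G[j, k]  (Ohm + Kirchhoff).
G = np.array([[1.0e-6, 5.0e-6],   # device conductances (siemens), illustrative
              [2.0e-6, 1.0e-6],
              [4.0e-6, 3.0e-6]])
V = np.array([0.2, 0.1, 0.3])     # row input voltages (volts), illustrative

I_col = V @ G                     # column read-out currents (amperes)
```

No data is shuttled between a memory and a separate ALU: the multiply-accumulate happens in the array itself, which is the energy advantage the abstract attributes to eliminating von Neumann data movement.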
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Wiese, Michael R.; Farr, Norma L.
2017-01-01
A computational study of a distributed electric propulsion wing with a 40° flap deflection has been completed using FUN3D. Two lift-augmentation power conditions were compared with the power-off configuration on the high-lift wing (40° flap) at a 73 mph freestream flow and for a range of angles of attack from -5 degrees to 14 degrees. The computational study also included investigating the benefit of corotating versus counter-rotating propeller spin direction to powered-lift performance. The results indicate a large benefit in lift coefficient, over the entire range of angles of attack studied, from using corotating propellers that all spin counter to the wingtip vortex. For the landing condition, 73 mph, the unpowered 40° flap configuration achieved a maximum lift coefficient of 2.3. With high-lift blowing, the maximum lift coefficient increased to 5.61. The lift augmentation is therefore a factor of 2.4. Taking advantage of the full-span lift augmentation at similar performance means that a wing powered with the distributed electric propulsion system requires only 42 percent of the wing area of the unpowered wing. This technology will allow wings to be 'cruise optimized', meaning that they will be able to fly closer to maximum lift-over-drag conditions at the design cruise speed of the aircraft.
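The quoted figures are mutually consistent, as a quick sketch of the arithmetic shows (using L = q·S·C_L at fixed speed and lift, so the required wing area scales as 1/C_L; the small gap between 41% and the stated 42% presumably reflects rounding or margins in the study):

```python
cl_unpowered = 2.3   # max lift coefficient, unpowered 40-degree-flap wing
cl_blown = 5.61      # max lift coefficient with high-lift blowing

# Lift augmentation factor quoted in the abstract:
augmentation = cl_blown / cl_unpowered   # ~2.44

# At the same dynamic pressure and total lift, L = q * S * CL,
# so the required wing area scales inversely with CL:
area_fraction = cl_unpowered / cl_blown  # ~0.41, near the stated 42 percent
```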
NASA Technical Reports Server (NTRS)
Krabach, Timothy
1998-01-01
Some of the many new and advanced exploration technologies that will enable space missions in the 21st century, and specifically the Manned Mars Mission, are explored in this presentation. Among these are the system on a chip, the Computed-Tomography Imaging Spectrometer, the digital camera on a chip, and other Micro Electro Mechanical Systems (MEMS) technology for space. Some of these MEMS are the silicon micromachined microgyroscope, a subliming solid micro-thruster, a micro-ion thruster, a silicon seismometer, a dewpoint microhygrometer, a micro laser Doppler anemometer, and tunable diode laser (TDL) sensors. This advanced technology insertion is critical for NASA to decrease mass, volume, power, and mission costs, and to increase functionality, science potential, and robustness.
Bio-Inspired Controller on an FPGA Applied to Closed-Loop Diaphragmatic Stimulation
Zbrzeski, Adeline; Bornat, Yannick; Hillen, Brian; Siu, Ricardo; Abbas, James; Jung, Ranu; Renaud, Sylvie
2016-01-01
Cervical spinal cord injury can disrupt the connections between the brain's respiratory network and the respiratory muscles, which can lead to partial or complete loss of ventilatory control and require ventilatory assistance. Unlike current open-loop technology, a closed-loop diaphragmatic pacing system could overcome the drawbacks of manual titration and respond to changing ventilation requirements. We present an original bio-inspired assistive technology for real-time ventilation assistance, implemented in a digital configurable Field Programmable Gate Array (FPGA). The bio-inspired controller, a spiking neural network (SNN) inspired by the medullary respiratory network, is as robust as a classic controller while having a flexible, low-power and low-cost hardware design. The system was simulated in MATLAB with FPGA-specific constraints and tested with a computational model of rat breathing; the model reproduced experimentally collected respiratory data in eupneic animals. The open-loop version of the bio-inspired controller was implemented on the FPGA, and electrical test bench characterizations confirmed the system functionality. Open- and closed-loop paradigms were then simulated to test the FPGA system's real-time behavior using the rat computational model. The closed-loop system monitors breathing and changes in respiratory demands to drive diaphragmatic stimulation. The simulated results inform future acute animal experiments and constitute the first step toward the development of a neuromorphic, adaptive, compact, low-power, implantable device. The bio-inspired hardware design optimizes the FPGA resource and time costs while harnessing the computational power of spike-based neuromorphic hardware. Its real-time feature makes it suitable for in vivo applications. PMID:27378844
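The kind of spiking unit an SNN controller is built from can be sketched with a leaky integrate-and-fire (LIF) neuron stepped at a fixed time step, the sort of update rule that maps naturally onto fixed-point FPGA logic. The parameter values below are illustrative, not those of the cited controller.

```python
# Minimal LIF neuron sketch (illustrative parameters, not the cited design):
# integrate a leaky membrane potential each time step; cross threshold,
# emit a spike, reset. A constant drive yields a regular spike train whose
# rate encodes input amplitude -- the basis of rate-coded control signals.

def lif_run(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Integrate an input current trace; return the list of spike times."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leaky integration
        if v >= v_thresh:             # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset               # reset membrane potential
    return spikes

train = lif_run([0.15] * 100)
print(train)
```
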
Mir Cooperative Solar Array Flight Performance Data and Computational Analysis
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.; Hoffman, David J.
1997-01-01
The Mir Cooperative Solar Array (MCSA) was developed jointly by the United States (US) and Russia to provide approximately 6 kW of photovoltaic power to the Russian space station Mir. The MCSA was launched to Mir in November 1995 and installed on the Kvant-1 module in May 1996. Since the MCSA photovoltaic panel modules (PPMs) are nearly identical to those of the International Space Station (ISS) photovoltaic arrays, MCSA operation offered an opportunity to gather multi-year performance data on this technology prior to its implementation on ISS. Two specially designed test sequences were executed in June and December 1996 to measure MCSA performance. Each test period encompassed 3 orbital revolutions whereby the current produced by the MCSA channels was measured. The temperature of MCSA PPMs was also measured. To better interpret the MCSA flight data, a dedicated FORTRAN computer code was developed to predict the detailed thermal-electrical performance of the MCSA. Flight data compared very favorably with computational performance predictions. This indicated that the MCSA electrical performance was fully meeting pre-flight expectations. There were no measurable indications of unexpected or precipitous MCSA performance degradation due to contamination or other causes after 7 months of operation on orbit. Power delivered to the Mir bus was lower than desired as a consequence of the retrofitted power distribution cabling. The strong correlation of experimental and computational results further bolsters the confidence level of performance codes used in critical ISS electric power forecasting. In this paper, MCSA flight performance tests are described as well as the computational modeling behind the performance predictions.
Tools of the Future: How Decision Tree Analysis Will Impact Mission Planning
NASA Technical Reports Server (NTRS)
Otterstatter, Matthew R.
2005-01-01
The universe is infinitely complex; however, the human mind has a finite capacity. The possible variables, metrics, and procedures in mission planning are far too many to address exhaustively. This is unfortunate because, in general, considering more possibilities leads to more accurate and more powerful results. To compensate, we can get more insightful results by employing our greatest tool, the computer. The power of the computer is harnessed here through a technique that considers every possibility: decision tree analysis. Although decision trees have been used in many other fields, this is innovative for space mission planning. Because this is a new strategy, no existing software is able to completely accommodate all of the requirements; this was determined through extensive research and testing of current technologies. It was necessary to create original software, for which a short-term model was finished this summer. The model was built in Microsoft Excel to take advantage of the familiar graphical interface for user input, computation, and viewing output. Macros were written to automate the process of tree construction, optimization, and presentation. The results are useful and promising. If this tool is successfully implemented in mission planning, our reliance on old-fashioned heuristics, an error-prone shortcut for handling complexity, will be reduced. The computer algorithms involved in decision trees will revolutionize mission planning. The planning will be faster and smarter, leading to optimized missions with the potential for more valuable data.
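The core computation such a tool automates is "rolling back" a decision tree: chance nodes average their outcomes by probability, and decision nodes take the best branch. The sketch below uses a toy mission-planning tree with made-up payoffs, not data from the cited model.

```python
# Decision-tree rollback sketch (toy example, invented payoffs).
# Leaves are numeric values; internal nodes are dicts tagged either
# "decision" (pick the best option) or "chance" (probability-weighted sum).

def rollback(node):
    """Return the expected value of a tree node."""
    if isinstance(node, (int, float)):
        return node
    if node["type"] == "decision":
        return max(rollback(child) for child in node["options"].values())
    if node["type"] == "chance":
        return sum(p * rollback(child) for p, child in node["outcomes"])
    raise ValueError("unknown node type")

launch_window = {
    "type": "decision",
    "options": {
        "launch_now": {"type": "chance",
                       "outcomes": [(0.7, 100), (0.3, -50)]},  # EV = 55
        "delay": {"type": "chance",
                  "outcomes": [(0.9, 60), (0.1, 0)]},          # EV = 54
    },
}
print(rollback(launch_window))   # best branch: launch_now
```
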
Evaluating the Efficacy of the Cloud for Cluster Computation
NASA Technical Reports Server (NTRS)
Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom
2012-01-01
Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing has made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. Another industry development is Amazon's high-performance computing (HPC) instances on Elastic Compute Cloud (EC2), which promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that, while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
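The quoted benchmark numbers imply the cluster's theoretical peak and per-core peak, which is a quick sanity check worth doing:

```python
# Back-of-the-envelope check: 2 TFLOPS sustained HPL at 70% efficiency
# implies the theoretical peak; dividing by 240 cores gives per-core peak.
sustained_tflops = 2.0
efficiency = 0.70
cores = 240

peak_tflops = sustained_tflops / efficiency     # ~2.86 TFLOPS theoretical
per_core_gflops = peak_tflops * 1000 / cores    # ~11.9 GFLOPS per core

print(f"theoretical peak: {peak_tflops:.2f} TFLOPS")
print(f"per-core peak:    {per_core_gflops:.1f} GFLOPS")
```
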
Computational Analysis of a Thermoelectric Generator for Waste-Heat Harvesting in Wearable Systems
NASA Astrophysics Data System (ADS)
Kossyvakis, D. N.; Vassiliadis, S. G.; Vossou, C. G.; Mangiorou, E. E.; Potirakis, S. M.; Hristoforou, E. V.
2016-06-01
Over recent decades, a constantly growing interest in the field of portable electronic devices has been observed. Recent developments in the scientific areas of integrated circuits and sensing technologies have enabled the realization and design of lightweight, low-power wearable sensing systems that can be of great use, especially for continuous health monitoring and performance recording applications. However, to facilitate wide penetration of such systems into the market, ensuring their seamless and reliable power supply remains a major concern. In this work, the performance of a thermoelectric generator, able to exploit the temperature difference established between the human body and the environment, has been examined computationally using ANSYS 14.0 finite-element modeling (FEM) software, as a means of providing the necessary power to various portable electronic systems. The performance variation imposed by different thermoelement geometries has been estimated to identify the most appropriate solution for the considered application. Furthermore, different ambient temperature and heat exchange conditions between the cold side of the generator and the environment have been investigated. The computational analysis indicated that power output on the order of 1.8 mW can be obtained from a 100-cm2 system if specific design criteria are fulfilled.
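Behind the FEM analysis sits a simple lumped model: a TEG's open-circuit voltage is V = n·S·ΔT for n couples of Seebeck coefficient S, and the load power peaks when the load resistance matches the internal resistance. The sketch below uses that idealized model with illustrative parameter values (typical Bi2Te3 ballpark), not the cited paper's geometry or results.

```python
# Idealized TEG model (not the cited FEM analysis; all values illustrative):
# open-circuit voltage V = n * S * dT, and load power
# P = V**2 * R_load / (R_int + R_load)**2, maximized at R_load == R_int.

def teg_power(n, seebeck, delta_t, r_int, r_load):
    v_oc = n * seebeck * delta_t
    return v_oc**2 * r_load / (r_int + r_load)**2

# e.g. a wearable-scale module with a few kelvin across it
n_couples = 1000
seebeck = 200e-6      # V/K per couple (assumed, Bi2Te3 ballpark)
delta_t = 3.0         # K, body-to-ambient after interface losses (assumed)
r_int = 10.0          # ohm (assumed)

p_matched = teg_power(n_couples, seebeck, delta_t, r_int, r_load=10.0)
print(f"matched-load power: {p_matched * 1e3:.2f} mW")
```
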
Design Techniques for Power-Aware Combinational Logic SER Mitigation
NASA Astrophysics Data System (ADS)
Mahatme, Nihaar N.
The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed, and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse-biased semiconductor junctions. At the terrestrial level, these radiation-induced errors are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, they can lead to an inversion of the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions that are part of combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clocking edges, which increases the likelihood of latching these transients.
In older technology generations, the probability of errors in flip-flops due to SETs being latched was much lower than that of direct strikes on flip-flops or SRAMs leading to SEUs, mainly because operating frequencies were much lower. The Intel Pentium II, for example, was fabricated in a 0.35 μm technology and operated between 200-330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02] [Bu97]. Therefore, there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that the power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Since then, however, the technology miniaturization that enabled high-speed, multicore, and parallel computing has steadily worsened the power density and power consumption problem. Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, pervasive sensor networks, and future eco-bio-sensors. Low power consumption is now regularly part of the design philosophy for digital products with applications ranging from computing to communication to healthcare.
Thus, designers today are left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in terms of area, speed, and, more importantly, power. The cost of protecting combinational logic with power-hungry mitigation approaches can thus disrupt the power budget significantly. Therefore, there is a strong need for techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.
The emerging role of cloud computing in molecular modelling.
Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W
2013-07-01
There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.
Methodical and technological aspects of creation of interactive computer learning systems
NASA Astrophysics Data System (ADS)
Vishtak, N. M.; Frolov, D. A.
2017-01-01
The article presents a methodology for developing an interactive computer training system for power plant personnel. The methods used include generalization of scientific and methodological sources on the use of computer-based training systems in vocational education, system analysis, and structural and object-oriented modeling of information systems. The relevance of developing interactive computer training systems for personnel preparation in educational and training centers is demonstrated. The development stages of a computer training system are identified, and the factors behind efficient use of an interactive computer training system are analyzed. An algorithm of the work to be performed at each development stage is offered, which makes it possible to optimize the time, financial, and labor costs of creating the interactive computer training system.
Computing through Scientific Abstractions in SysBioPSE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Stephan, Eric G.; Gracio, Deborah K.
2004-10-13
Today, biologists and bioinformaticists have a tremendous amount of computational power at their disposal. With the availability of supercomputers, burgeoning scientific databases and digital libraries such as GenBank and PubMed, and pervasive computational environments such as the Grid, biologists have access to a wealth of computational capabilities and scientific data at hand. Yet, the rapid development of computational technologies has far exceeded the typical biologist's ability to effectively apply the technology in their research. Computational sciences research and development efforts such as the Biology Workbench, BioSPICE (Biological Simulation Program for Intra-Cellular Evaluation), and BioCoRE (Biological Collaborative Research Environment) are important in connecting biologists and their scientific problems to computational infrastructures. On the Computational Cell Environment and Heuristic Entity-Relationship Building Environment projects at the Pacific Northwest National Laboratory, we are jointly developing a new breed of scientific problem solving environment called SysBioPSE that will allow biologists to access and apply computational resources in the scientific research context. In contrast to other computational science environments, SysBioPSE operates as an abstraction layer above a computational infrastructure. The goal of SysBioPSE is to allow biologists to apply computational resources in the context of the scientific problems they are addressing and the scientific perspectives from which they conduct their research. More specifically, SysBioPSE allows biologists to capture and represent scientific concepts and theories and experimental processes, and to link these views to scientific applications, data repositories, and computer systems.
A Remote Monitoring System for Voltage, Current, Power and Temperature Measurements
NASA Astrophysics Data System (ADS)
Barakat, E.; Sinno, N.; Keyrouz, C.
This paper presents the study and design of a monitoring system for the continuous measurement of electrical energy parameters such as voltage, current, power, and temperature. The system is designed to monitor the data remotely over the Internet. The electronic power meter is based on a PIC-family microcontroller from Microchip Technology Inc. The design takes into consideration correct operation in the event of an outage or brownout by recording the electrical values and temperatures in the microcontroller's internal EEPROM. A digital display shows the acquired measurements, and a computer remotely monitors the data over the Internet.
Heterogeneous real-time computing in radio astronomy
NASA Astrophysics Data System (ADS)
Ford, John M.; Demorest, Paul; Ransom, Scott
2010-07-01
Modern computer architectures suited for general-purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between computer systems based on the ubiquitous x86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphics Processing Units (GPUs). We show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data are then packaged up and shipped over fast networks to a cluster of general-purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data are handled by the CPUs and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the second-order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of its heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix-based application can operate on a single computer or on an MPI-configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
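The idea behind OIS, reduced to its core, is to convolve a reference image with a matching kernel so its blur resembles the science image, then subtract, leaving only what changed. The sketch below uses a uniform stand-in kernel rather than the paper's second-order spatially-varying one, and a hand-rolled "valid" convolution; it illustrates the principle, not the cited pipeline.

```python
# Core OIS idea in miniature (not the cited implementation): blur the
# reference to match the science image's point-spread function, subtract,
# and the residual highlights transients. Kernel here is a uniform 3x3
# stand-in; real OIS fits a spatially-varying kernel.
import numpy as np

def convolve2d_valid(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
reference = rng.normal(size=(32, 32))

blur = np.ones((3, 3)) / 9.0                 # stand-in matching kernel
science = convolve2d_valid(reference, blur)  # same field, different seeing
science[10, 10] += 5.0                       # inject a transient

difference = science - convolve2d_valid(reference, blur)
print(np.unravel_index(np.abs(difference).argmax(), difference.shape))
```

The inner loops are exactly the per-pixel work that the paper offloads to GPUs; each output pixel is independent, which is why the method parallelizes so well.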
Single Cell Genomics: Approaches and Utility in Immunology
Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A
2017-01-01
Single cell genomics offers powerful tools for studying lymphocytes, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population-level. Advances in computer science and single cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single cell RNA-seq data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. PMID:28094102
An introduction to real-time graphical techniques for analyzing multivariate data
NASA Astrophysics Data System (ADS)
Friedman, Jerome H.; McDonald, John Alan; Stuetzle, Werner
1987-08-01
Orion I is a graphics system used to study applications of computer graphics - especially interactive motion graphics - in statistics. Orion I is the newest of a family of "Prim" systems, whose most striking common feature is the use of real-time motion graphics to display three dimensional scatterplots. Orion I differs from earlier Prim systems through the use of modern and relatively inexpensive raster graphics and microprocessor technology. It also delivers more computing power to its user; Orion I can perform more sophisticated real-time computations than were possible on previous such systems. We demonstrate some of Orion I's capabilities in our film: "Exploring data with Orion I".
Cloud computing for energy management in smart grid - an application survey
NASA Astrophysics Data System (ADS)
Naveen, P.; Kiing Ing, Wong; Kobina Danquah, Michael; Sidhu, Amandeep S.; Abu-Siada, Ahmed
2016-03-01
The smart grid is an emerging energy system in which information technology, tools, and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To address these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid.
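One concrete computation an economic power dispatch model performs is splitting a demand across generators with quadratic cost curves by equalizing incremental cost (lambda iteration). The sketch below is a textbook version with invented generator data, not the survey's proposed cloud model, and it ignores generator limits and losses.

```python
# Classic economic dispatch by lambda iteration (illustrative data, no
# generator limits or network losses). Costs C_i(P) = a_i + b_i*P + c_i*P**2,
# so at the optimum every unit runs where dC_i/dP = b_i + 2*c_i*P_i = lambda,
# giving P_i(lambda) = (lambda - b_i) / (2*c_i). Bisect on lambda until the
# scheduled total matches demand.

def economic_dispatch(gens, demand, tol=1e-9):
    """gens: list of (b, c) marginal-cost coefficients per unit."""
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        lam = (lo + hi) / 2
        total = sum((lam - b) / (2 * c) for b, c in gens)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [(lam - b) / (2 * c) for b, c in gens]

units = [(8.0, 0.01), (9.0, 0.02), (7.5, 0.015)]   # (b, c), made-up units
schedule = economic_dispatch(units, demand=500.0)
print([round(p, 2) for p in schedule])
```
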
Harnessing the Power of the Sun
NASA Technical Reports Server (NTRS)
2005-01-01
The Environmental Research Aircraft and Sensor Technology (ERAST) Alliance was created in 1994 and operated for 9 years as a NASA-sponsored coalition of 28 members from small companies, government, universities, and nonprofit organizations. ERAST's goal was to foster development of remotely piloted aircraft technology for scientific, humanitarian, and commercial purposes. Some of the aircraft in the ERAST Alliance were intended to fly unmanned at high altitudes for days at a time, and flying for such durations required alternative sources of power that did not add weight. The most successful, and lightest, power solution for this type of sustained flight is solar energy. Photovoltaic cells convert sunlight directly into electricity. They are made of semiconducting materials similar to those used in computer chips. When sunlight is absorbed, electrons are knocked loose from their atoms, allowing electricity to flow. Under the ERAST Alliance, two solar-powered technology demonstration aircraft, Pathfinder and Helios, were developed. Pathfinder is a lightweight, remotely piloted flying-wing aircraft that demonstrated the technology of applying solar cells for long-duration, high-altitude flight. Solar arrays covering most of the upper wing surface provide power for the aircraft's electric motors, avionics, communications, and other electronic systems. Pathfinder also has a backup battery system that can provide power for between 2 and 5 hours to allow limited-duration flight after dark. It was designed, built, and operated by AeroVironment, Inc., of Monrovia, California. On September 11, 1995, Pathfinder reached an altitude of 50,500 feet, setting a new altitude record for solar-powered aircraft. The National Aeronautic Association presented the NASA-industry team with an award for 1 of the 10 Most Memorable Record Flights of 1995.
Solid-state Isotopic Power Source for Computer Memory Chips
NASA Technical Reports Server (NTRS)
Brown, Paul M.
1993-01-01
Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10-year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent, which is two to three times greater than the 6 to 8 percent capabilities of current thermoelectric systems. Radioisotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.
Towards quantum chemistry on a quantum computer.
Lanyon, B P; Whitfield, J D; Gillett, G G; Goggin, M E; Almeida, M P; Kassal, I; Biamonte, J D; Mohseni, M; Powell, B J; Barbieri, M; Aspuru-Guzik, A; White, A G
2010-02-01
Exact first-principles calculations of molecular properties are currently intractable because their computational cost grows exponentially with both the number of atoms and basis set size. A solution is to move to a radically different model of computing by building a quantum computer, which is a device that uses quantum systems themselves to store and process data. Here we report the application of the latest photonic quantum computer technology to calculate properties of the smallest molecular system: the hydrogen molecule in a minimal basis. We calculate the complete energy spectrum to 20 bits of precision and discuss how the technique can be expanded to solve large-scale chemical problems that lie beyond the reach of modern supercomputers. These results represent an early practical step toward a powerful tool with a broad range of quantum-chemical applications.
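The bit-by-bit energy readout described above is the iterative phase-estimation idea: the energy enters as an eigenphase phi of U = exp(-iHt), and each iteration measures one binary digit, least-significant first, feeding measured bits back as a corrective phase. The sketch below is a purely classical simulation of that readout logic, assuming phi has an exact n-bit expansion so outcomes are deterministic; a real photonic device measures noisy single-qubit probabilities.

```python
# Classical sketch of iterative phase estimation (not the cited photonic
# experiment). For an exactly n-bit phase, each measurement is deterministic:
# after 2**(k-1) applications of U the register phase (mod 1) is
# 0.b_k b_{k+1} ..., and the feedback omega strips the known tail, leaving
# 0.b_k, so the |1> outcome probability is 0 or 1.
import math

def iterative_phase_estimation(phi, n_bits):
    """Recover the binary digits of an eigenphase phi in [0, 1)."""
    bits = [0] * (n_bits + 1)   # bits[k] holds binary digit phi_k
    omega = 0.0                 # feedback phase from bits already read
    for k in range(n_bits, 0, -1):
        frac = (2 ** (k - 1) * phi - omega) % 1.0
        p_one = math.sin(math.pi * frac) ** 2   # prob. of measuring |1>
        bits[k] = 1 if p_one > 0.5 else 0
        omega = bits[k] / 4.0 + omega / 2.0     # omega -> 0.0 b_k b_{k+1} ...
    return sum(bits[k] * 2.0 ** -k for k in range(1, n_bits + 1))

# 20 iterations, matching the 20 bits of precision reported for H2.
# The test phase 0.6875 = 0.1011 in binary is exactly representable.
print(iterative_phase_estimation(0.6875, 20))
```
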
Exploration of operator method digital optical computers for application to NASA
NASA Technical Reports Server (NTRS)
1990-01-01
Digital optical computer design has been focused primarily on parallel (single point-to-point interconnection) implementation. This architecture is compared to currently developing VHSIC systems. Using demonstrated multichannel acousto-optic devices, a figure of merit can be formulated, termed the Gate Interconnect Bandwidth Product (GIBP). Conventional parallel optical digital computer architecture demonstrates only marginal competitiveness at best when compared to projected semiconductor implementations. Global, analog global, quasi-digital, and full digital interconnects are briefly examined as alternatives to parallel digital computer architecture. Digital optical computing is becoming a very tough competitor to semiconductor technology, since it can support a very high degree of three-dimensional interconnect density and high degrees of fan-in without capacitive loading effects at very low power consumption levels.
NASA Technical Reports Server (NTRS)
Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
With programs such as the US High Performance Computing and Communications Program (HPCCP), the attention of scientists and engineers worldwide has been focused on the potential of very high performance scientific computing, namely systems that are hundreds or thousands of times more powerful than those typically available in desktop systems at any given point in time. Extending the frontiers of computing in this manner has resulted in remarkable advances, both in computing technology itself and in the various scientific and engineering disciplines that utilize these systems. Within a month or two, a sustained rate of 1 Tflop/s (also written 1 teraflops, or 10(exp 12) floating-point operations per second) is likely to be achieved by the 'ASCI Red' system at Sandia National Laboratory in New Mexico. With this objective in sight, it is reasonable to ask what lies ahead for high-end computing.
Impact of the proposed energy tax on nuclear electric generating technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edmunds, T.A.; Lamont, A.D.; Pasternak, A.D.
1993-05-01
The President's new economic initiatives include an energy tax that will affect the costs of power from most electric generating technologies. The tax on nuclear power could be applied in a number of different ways at several different points in the fuel cycle. These different approaches could have different effects on the generation costs and benefits of advanced reactors. The Office of Nuclear Energy has developed models for assessing the costs and benefits of advanced reactor cycles which must be updated to take into account the impacts of the proposed tax. This report has been prepared to assess the spectrum of impacts of the energy tax on nuclear power and can be used in updating the Office's economic models. This study was conducted in the following steps. First, the most authoritative statement of the proposed tax available at this time was obtained. Then the impacts of the proposed tax on the costs of nuclear and fossil fueled generation were compared. Next, several other possible approaches to taxing nuclear energy were evaluated, and the cost impact on several advanced nuclear technologies and a current light water technology were computed. Finally, the rationale for the energy tax as applied to various electric generating methods was examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tugurlan, Maria C.; Kirkham, Harold; Chassin, David P.
Abstract Budget and schedule overruns in product development due to the use of immature technologies constitute an important matter for program managers. Moreover, unexpected lack of technology maturity is also a problem for buyers. Both sides of the situation would benefit from an unbiased measure of technology maturity. This paper presents the use of a maturity metric called Technology Readiness Level (TRL) in the milieu of the smart grid. For most of the time they have been in existence, power utilities have been protected monopolies, guaranteed a return on investment on anything they could justify adding to the rate base. Such a situation did not encourage innovation, and instead led to widespread risk-avoidance behavior in many utilities. The situation changed at the end of the last century, with a series of regulatory measures, beginning with the Public Utility Regulatory Policy Act of 1978. However, some bad experiences have actually served to strengthen the resistance to innovation by some utilities. Some aspects of the smart grid, such as the addition of computer-based control to the power system, face an uphill battle. It is our position that the addition of TRLs to the decision-making process for smart grid power-system projects will lead to an environment of more confident adoption.
Study of basic computer competence among public health nurses in Taiwan.
Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling
2004-03-01
Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83; total score range 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer at workplace and Internet access, job position, education level, and age) that significantly influenced computer competence, which accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.
Market survey of fuel cells in Mexico: Niche for low power portable systems
NASA Astrophysics Data System (ADS)
Ramírez-Salgado, Joel; Domínguez-Aguilar, Marco A.
This work provides an overview of the potential market in Mexico for portable electronic devices to be potentially powered by direct methanol fuel cells. An extrapolation method based on data published in Mexico and abroad served to complete this market survey. A review of electronics consumption set the basis for the future forecast and technology assimilation. The potential market for fuel cells for mobile phones in Mexico will be around 5.5 billion USD by 2013, considering a cost of 41 USD per cell in a market of 135 million mobile phones. Likewise, the market for notebook computers, PDAs and other electronic devices will likely grow in the future, with a combined consumption of fuel cell technology equivalent to 1.6 billion USD by 2014.
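The headline market figure follows from simple unit arithmetic on the numbers quoted in the abstract; a minimal check:

```python
# Market-size arithmetic from the abstract: 135 million mobile phones at
# 41 USD per fuel cell.
unit_cost_usd = 41
handsets = 135_000_000
market_usd = unit_cost_usd * handsets
print(market_usd)  # 5535000000, i.e. roughly 5.5 billion USD
```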
[Stressor and stress reduction strategies for computer software engineers].
Asakura, Takashi
2002-07-01
First, in this article we discuss 10 significant occupational stressors for computer software engineers, based on a review of the scientific literature on their stress and mental health. The stressors include 1) quantitative work overload, 2) time pressure, 3) qualitative workload, 4) the speed and diffusion of technological innovation, and technological divergence, 5) low discretionary power, 6) an underdeveloped career pattern, 7) low earnings/rewards from jobs, 8) difficulties in managing a project team for software development and establishing a support system, 9) difficulties in customer relations, and 10) personality characteristics. In addition, we delineate the working and organizational conditions that cause such occupational stressors in order to find strategies to reduce those stressors in the workplace. Finally, we suggest three stressor and stress reduction strategies for software engineers.
NASA Technical Reports Server (NTRS)
Ray, R. J.; Hicks, J. W.; Alexander, R. I.
1988-01-01
The X-29A advanced technology demonstrator has shown the practicality and advantages of the capability to compute and display, in real time, aeroperformance flight results. This capability includes the calculation of the in-flight measured drag polar, lift curve, and aircraft specific excess power. From these elements many other types of aeroperformance measurements can be computed and analyzed. The technique can be used to give an immediate postmaneuver assessment of data quality and maneuver technique, thus increasing the productivity of a flight program. A key element of this new method was the concurrent development of a real-time in-flight net thrust algorithm, based on the simplified gross thrust method. This net thrust algorithm allows for the direct calculation of total aircraft drag.
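The specific excess power mentioned above follows from the standard aeroperformance relation Ps = V(T − D)/W; a minimal sketch (variable names and the worked numbers are illustrative, not taken from the X-29A flight software):

```python
def specific_excess_power(velocity_fps, thrust_lbf, drag_lbf, weight_lbf):
    """Specific excess power Ps = V * (T - D) / W, in ft/s.

    Standard aeroperformance relation; inputs here are hypothetical.
    """
    return velocity_fps * (thrust_lbf - drag_lbf) / weight_lbf

# Example: 800 ft/s, 12,000 lbf net thrust, 9,000 lbf drag, 16,000 lbf weight
ps = specific_excess_power(800.0, 12_000.0, 9_000.0, 16_000.0)
print(ps)  # 150.0 ft/s of specific excess power
```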
Explosion safety in industrial electrostatics
NASA Astrophysics Data System (ADS)
Szabó, S. V.; Kiss, I.; Berta, I.
2011-01-01
Complicated industrial systems are often endangered by electrostatic hazards, both atmospheric (the lightning phenomenon; primary and secondary lightning protection) and industrial (technological problems caused by static charging, and fire and explosion hazards). According to the classical approach, protective methods are applied to remove electrostatic charging and avoid damage; however, no attempt is made to compute the risk before and after applying the protective method, relying instead on well-educated and practiced expertise. The Budapest School of Electrostatics, in close cooperation with industrial partners, develops new suitable solutions for probability-based decision support (Static Control Up-to-date Technology, SCOUT) using soft computing methods. This new approach can be used to assess and audit existing systems and, using the predictive power of the models, to design and plan activities in industrial electrostatics.
A Battery-Aware Algorithm for Supporting Collaborative Applications
NASA Astrophysics Data System (ADS)
Rollins, Sami; Chang-Yit, Cheryl
Battery-powered devices such as laptops, cell phones, and MP3 players are becoming ubiquitous. There are several significant ways in which the ubiquity of battery-powered technology impacts the field of collaborative computing. First, new applications, such as collaborative data gathering, become possible. Also, existing applications that depend on collaborating devices to maintain the system infrastructure must be reconsidered. Fundamentally, the problem lies in the fact that collaborative applications often require end-user computing devices to perform tasks that happen in the background and are not directly advantageous to the user. In this work, we seek to better understand how laptop users use the batteries attached to their devices, and we analyze a battery-aware alternative to Gnutella’s ultrapeer selection algorithm. Our algorithm provides insight into how system maintenance tasks can be allocated to battery-powered nodes. The most significant result of our study indicates that a large portion of laptop users can participate in system maintenance without sacrificing any of their battery. These results show great promise for existing collaborative applications as well as new applications, such as collaborative data gathering, that rely upon battery-powered devices.
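One plausible shape for a battery-aware selection policy, sketched in the spirit of the paper rather than as the authors' exact algorithm (the `Node` fields and the 50% threshold are assumptions): prefer nodes on AC power, then nodes with the most remaining charge.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    on_ac_power: bool
    battery_pct: float  # remaining charge, 0-100

def select_ultrapeers(nodes, count, min_battery_pct=50.0):
    """Pick up to `count` maintenance nodes, sparing low-battery devices."""
    eligible = [n for n in nodes if n.on_ac_power or n.battery_pct >= min_battery_pct]
    # AC-powered nodes first, then by remaining charge.
    eligible.sort(key=lambda n: (n.on_ac_power, n.battery_pct), reverse=True)
    return eligible[:count]

nodes = [Node("a", False, 80), Node("b", True, 20), Node("c", False, 30)]
print([n.name for n in select_ultrapeers(nodes, 2)])  # ['b', 'a']; 'c' is spared
```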
Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud
Florence, A. Paulin; Shanthi, V.; Simon, C. B. Sunil
2016-01-01
Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and the entire set of computational requests is to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated using asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to the selected host. Using the measured time complexity, the required clock frequency of the host is computed. Accordingly, the CPU frequency is scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total power consumption. PMID:27239551
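The frequency-selection step can be sketched as follows, assuming hypothetical CPU frequency steps; the cycle estimate stands in for the complexity-derived workload the abstract describes, and none of the numbers come from the paper:

```python
# DVFS-style frequency selection sketch: given an estimated cycle count and a
# deadline, pick the lowest available frequency that still finishes in time.
def pick_frequency(est_cycles, deadline_s, freq_steps_hz):
    required_hz = est_cycles / deadline_s
    for f in sorted(freq_steps_hz):
        if f >= required_hz:
            return f          # slowest step that meets the deadline
    return max(freq_steps_hz)  # run flat out if the deadline is infeasible

steps = [8e8, 1.2e9, 1.8e9, 2.4e9]  # hypothetical CPU P-states
print(pick_frequency(est_cycles=1.0e9, deadline_s=1.0, freq_steps_hz=steps))
# 1.2 GHz: the slowest step that can retire 1e9 cycles within 1 second
```

Running below peak frequency is what yields the energy savings, since dynamic power grows superlinearly with frequency and voltage.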
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to achieving the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements while also providing the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would significantly benefit from parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
From Geocentrism to Allocentrism: Teaching the Phases of the Moon in a Digital Full-Dome Planetarium
ERIC Educational Resources Information Center
Chastenay, Pierre
2016-01-01
An increasing number of planetariums worldwide are turning digital, using ultra-fast computers, powerful graphics cards, and high-resolution video projectors to create highly realistic astronomical imagery in real time. This modern technology makes it possible for the audience to observe astronomical phenomena from a geocentric as well as an…
Technical Assessment: Integrated Photonics
2015-10-01
in global internet protocol traffic as a function of time by local access technology. Photonics continues to play a critical role in enabling this... communication networks. This has enabled services like the internet, high performance computing, and power-efficient large-scale data centers. The... signal processing, quantum information science, and optics for free space applications. However major obstacles challenge the implementation of
Fidget with Widgets: CNC Activity Introduces the Flatbed Router
ERIC Educational Resources Information Center
Tryon, Daniel V.
2006-01-01
The computer numerical control (CNC) flatbed router is a powerful tool and a must-have piece of equipment for any technology education program in which students will produce a product--whether it involves Manufacturing, Materials Processing, or any of the vast array of Project Lead the Way courses. This article describes an activity--producing a…
ERIC Educational Resources Information Center
Ausburn, Lynna J.; Ausburn, Floyd B.
2004-01-01
Virtual Reality has been defined in many different ways and now means different things in various contexts. VR can range from simple environments presented on a desktop computer to fully immersive multisensory environments experienced through complex headgear and bodysuits. In all of its manifestations, VR is basically a way of simulating or…
ERIC Educational Resources Information Center
Schaffhauser, Dian
2009-01-01
Most smart technology leaders can name multiple efforts they have already taken or expect to pursue in their schools to "green up" IT operations, such as powering off idle computers and virtualizing the data center. One area that many of them may not be so savvy about, however, is hardware disposal: "What to do with the old stuff?" After all, it…
Bridging the Arts and Computer Science: Engaging At-Risk Students through the Integration of Music
ERIC Educational Resources Information Center
Moyer, Lisa; Klopfer, Michelle; Ernst, Jeremy V.
2018-01-01
Linux Laptop Orchestra (L2Ork), founded in 2009 in the Virginia Tech Music Department's Digital and Interactive Sound & Intermedia Studio, "explores the power of gesture, communal interaction, and the multidimensionality of arts, as well as technology's potential to seamlessly integrate arts and sciences with particular focus on K-12…
Heat Treatment Used to Strengthen Enabling Coating Technology for Oil-Free Turbomachinery
NASA Technical Reports Server (NTRS)
Edmonds, Brian J.; DellaCorte, Christopher
2002-01-01
The PS304 high-temperature solid lubricant coating is a key enabling technology for Oil-Free turbomachinery propulsion and power systems. Breakthroughs in the performance of advanced foil air bearings and improvements in computer-based finite element modeling techniques are the key technologies enabling the development of Oil-Free aircraft engines being pursued by the Oil-Free Turbomachinery team at the NASA Glenn Research Center. PS304 is a plasma spray coating applied to the surface of shafts operating against foil air bearings, or in any other component requiring solid lubrication at high temperatures where conventional materials such as graphite cannot function.
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Editor); Burnham, Calvin (Editor)
1995-01-01
The papers presented at the 4th International Conference Exhibition: World Congress on Superconductivity, held at the Marriott Orlando World Center, Orlando, Florida, are contained in this document and encompass the research, technology, applications, funding, political, and social aspects of superconductivity. Specifically, the areas covered included: high-temperature materials; thin films; C-60 based superconductors; persistent magnetic fields and shielding; fabrication methodology; space applications; physical applications; performance characterization; device applications; weak link effects and flux motion; accelerator technology; superconducting energy storage; future research and development directions; medical applications; granular superconductors; wire fabrication technology; computer applications; technical and commercial challenges; and power and energy applications.
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Editor); Burnham, Calvin (Editor)
1995-01-01
This document contains papers presented at the 4th International Conference Exhibition: World Congress on Superconductivity held June 27-July 1, 1994 in Orlando, Florida. These documents encompass research, technology, applications, funding, political, and social aspects of superconductivity. The areas covered included: high-temperature materials; thin films; C-60 based superconductors; persistent magnetic fields and shielding; fabrication methodology; space applications; physical applications; performance characterization; device applications; weak link effects and flux motion; accelerator technology; superconducting energy storage; future research and development directions; medical applications; granular superconductors; wire fabrication technology; computer applications; technical and commercial challenges; and power and energy applications.
Creation of Power Reserves Under the Market Economy Conditions
NASA Astrophysics Data System (ADS)
Mahnitko, A.; Gerhards, J.; Lomane, T.; Ribakov, S.
2008-09-01
The main task of the control of an electric power system (EPS) is to ensure reliable power supply at the least cost. In this case, requirements on the electric power quality and power supply reliability, as well as cost limitations on the energy resources, must be observed. An available power reserve in an EPS is a necessary condition for keeping it in operation while maintaining normal operating variables (frequency, node voltages, power flows via the transmission lines, etc.). The authors examine possibilities for creating power reserves that could be offered for sale by the electric power producer. They consider a procedure of price formation for the power reserves and propose a relevant mathematical model for a united EPS, the initial data being the fuel-cost functions for individual systems, technological limitations on the active power generation, and consumers' load. The maximum profit for the producer is taken as the optimization criterion. The model is exemplified by a concentrated EPS. The computations have been performed in MATLAB.
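For a single producer with a quadratic fuel-cost function C(g) = a + bg + cg², profit maximization at a given price p has the closed form g* = (p − b)/(2c), clipped to the technological limits; a minimal sketch with hypothetical coefficients (not the paper's data or its full multi-system model):

```python
def optimal_output(p, a, b, c, g_min, g_max):
    """Profit-maximizing generation for cost C(g) = a + b*g + c*g**2 at price p."""
    g_star = (p - b) / (2.0 * c)           # first-order condition p = C'(g)
    return min(max(g_star, g_min), g_max)  # respect technological limits

def profit(g, p, a, b, c):
    return p * g - (a + b * g + c * g * g)

g = optimal_output(p=40.0, a=100.0, b=10.0, c=0.05, g_min=50.0, g_max=400.0)
print(g, profit(g, 40.0, 100.0, 10.0, 0.05))  # 300.0 4400.0
```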
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 processors running at 1.5 GHz.
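The general idea behind spectral (FFT-based) generation of a 2D random field with a prescribed power spectrum, the family of methods to which the abstract's modified spectral-phase method belongs, can be sketched in NumPy; the power-law exponent and grid size are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def random_field_2d(n, exponent=-11.0 / 6.0, seed=0):
    """Generate an n x n random field by filtering white noise in Fourier space."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(fx, fx, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = np.inf                      # suppress the zero-frequency pole
    amplitude = k2 ** (exponent / 2.0)     # power-law spectral filter
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(amplitude * noise).real

screen = random_field_2d(256)
print(screen.shape)  # (256, 256)
```

Because each spectral mode is filtered independently, the method is embarrassingly parallel, which is what makes GPU implementations attractive.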
NASA Technical Reports Server (NTRS)
Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric
2004-01-01
Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes the use of resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction-related tasks in an emulated outdoor environment despite high levels of uncertainty in motion and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.
Distributed sensor networks: a cellular nonlinear network perspective.
Haenggi, Martin
2003-12-01
Large-scale networks of integrated wireless sensors are becoming increasingly tractable. Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption, and cost for digital circuitry and wireless communications. Networking, self-organization, and distributed operation are crucial ingredients to harness the sensing, computing, and communication capabilities of the nodes into a complete system. This article shows that such networks can be considered as cellular nonlinear networks (CNNs), and that their analysis and design may greatly benefit from the rich theoretical results available for CNNs.
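A standard CNN cell evolves as dx/dt = −x + A∗y + B∗u + z with the piecewise-linear output y = ½(|x+1| − |x−1|), coupled to its neighbors through 3×3 templates; a minimal Euler-integration sketch (the templates and inputs here are illustrative, not from the article):

```python
import numpy as np

def cnn_output(x):
    # Standard CNN piecewise-linear saturation: y = 0.5*(|x+1| - |x-1|)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def conv3x3(grid, template):
    # Correlate the grid with a 3x3 template, zero-padded at the borders.
    out = np.zeros_like(grid)
    padded = np.pad(grid, 1)
    for i in range(3):
        for j in range(3):
            out += template[i, j] * padded[i:i + grid.shape[0], j:j + grid.shape[1]]
    return out

def cnn_step(x, u, A, B, z, dt=0.05):
    # Euler step of dx/dt = -x + A*y + B*u + z
    dx = -x + conv3x3(cnn_output(x), A) + conv3x3(u, B) + z
    return x + dt * dx

A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])  # self-feedback only
B = np.zeros((3, 3))
x = np.full((8, 8), 0.5)
u = np.zeros((8, 8))
for _ in range(200):
    x = cnn_step(x, u, A, B, z=0.0)
print(float(x.max()))  # converges toward the stable state x = 2
```

With a self-feedback gain above 1, each cell is bistable; coupling templates with nonzero neighbor entries produce the wave-like collective dynamics the CNN literature exploits.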
Orthorectification by Using Gpgpu Method
NASA Astrophysics Data System (ADS)
Sahin, H.; Kulur, S.
2012-07-01
Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capabilities and high memory bandwidth compared to central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capabilities has attracted the attention of researchers dealing with complex problems that require intensive calculation. This interest has given rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful hardware that is also cheap and affordable, so they have become an alternative to conventional processors. Graphics chips, which were once standard application hardware, have been transformed into modern, powerful, and programmable processors to meet overall needs. Especially in recent years, the use of graphics processing units in general-purpose computation has drawn researchers and developers to this area. The biggest problem is that graphics processing units use programming models unlike current programming methods. Therefore, efficient GPU programming requires re-coding the current program algorithm while considering the limitations and structure of the graphics hardware. Multi-core processors cannot currently be programmed using traditional sequential methods, and event-driven procedural programming likewise cannot be used to program them. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed.
GPUs thus carry out the computation more quickly and accurately, whereas CPUs, which perform one computation at a time according to flow control, are slower. This structure can be exploited in various applications of computer technology. This study covers how the general-purpose parallel programming model and computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA; sample images of various sizes were processed and the results evaluated. The GPGPU method is especially useful for repeating the same computations over highly dense data, thus finding the solution quickly.
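The "map data elements to parallel threads" pattern behind projective rectification can be illustrated with NumPy vectorization standing in for GPU threads: every output pixel coordinate passes through the same 3×3 homography independently (the matrix H below is an arbitrary example, not from the study):

```python
import numpy as np

def project_coords(H, width, height):
    """Map every pixel coordinate of a width x height grid through homography H."""
    ys, xs = np.mgrid[0:height, 0:width]
    ones = np.ones_like(xs)
    pts = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)  # homogeneous coords
    mapped = H @ pts               # one matrix multiply = all "threads" at once
    mapped = mapped / mapped[2]    # perspective divide
    return mapped[0].reshape(height, width), mapped[1].reshape(height, width)

H = np.array([[1.0, 0.1, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
mx, my = project_coords(H, 4, 3)
print(mx[0, 0], my[0, 0])  # 5.0 3.0  (pixel (0,0) maps to (5, 3))
```

In a CUDA implementation the same per-pixel transform would be one thread per output pixel, which is why the speedup grows with image size.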
2004-01-01
Background As more and more information technology (IT) resources become available both for support of campus-based medical education and for Web-based learning, it becomes increasingly interesting to map the information technology resources available to medical students and the attitudes students have towards their use. Objective To determine how extensively and effectively information handling skills are being taught in the medical curriculum, the study investigated Internet and computer availability and usage, and attitudes towards information technology, among first-year medical students in Aarhus, Denmark, during a five-year period. Methods In the period from 1998 to 2002, students beginning the first semester of medical school were given courses on effective use of IT in their studies. As a part of the tutorials, the students were asked to complete a web-based questionnaire which included questions related to IT readiness and attitudes towards using IT in their studies. Results A total of 1159 students (78%) responded. Overall, 71.7% of the respondents indicated they had access to a computer at home, a number that did not change significantly during the study period. Over time, the power of students' computers and the use of e-mail and the Internet did increase significantly. By fall 2002, approximately 90% of students used e-mail regularly, 80% used the Internet regularly, and 60% had access to the Internet from home. Significantly more males than females had access to a computer at home, and males had a more positive attitude towards the use of computers in their medical studies. A fairly constant number of students (3-7%) stated that they would prefer not to have to use computers in their studies. Conclusions Taken together with our experience from classroom teaching, these results indicate that optional teaching of basic information technology still needs to be integrated into medical studies, and that this need does not seem likely to disappear in the near future.
PMID:15111276
NASA Astrophysics Data System (ADS)
Sakaguchi, Hideharu
Do you remember expert systems? Impressions of them vary: some might say “They remind me of the old days”, while others might say “They were really troublesome”. About 25 years ago, from the late 1980s to the middle of the 1990s, as the Showa era was about to change into the Heisei era, artificial intelligence boomed. Research and development of expert systems, which were equipped with expertise and were intended to work as smartly as a human expert, advanced in various fields. Our company also took up the technology as a new kind of system that covered weak points of conventional computer technology. We started research and development in 1984 and installed an expert system in a SCADA system, which began operating in March 1990 in the Fukuoka Integrated Control Center. In this essay, as an electric power engineer who was involved in the development at that time, I describe the circumstances and the travails of developing an expert system that supported restorative actions for outage and overload conditions in power networks.