Advanced laptop and small personal computer technology
NASA Technical Reports Server (NTRS)
Johnson, Roger L.
1991-01-01
Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computer and mobile workstation technology are covered: background, applications, high-end products, technology trends, requirements for the Control Center application, and recommendations for the future.
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Follen, Gregory J. (Technical Monitor); Radenski, Atanas
2003-01-01
The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer to Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end Internet nodes. Our project is focused on a generic divide-and-conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever-changing pool of lower-end Internet nodes.
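The divide-and-conquer pattern at the heart of this architecture can be pictured with a minimal sketch (Python). The threshold, the problem type, and the in-process recursion below are illustrative assumptions; the project described above would ship the recursive calls to satellite servers and volunteer nodes rather than run them locally.

from typing import Callable, List

def divide_and_conquer(problem: List[int],
                       solve_base: Callable[[List[int]], int],
                       threshold: int = 4) -> int:
    # Recursively split the problem until the pieces are small enough to solve
    # directly, then combine the partial results (here simply by summation).
    if len(problem) <= threshold:
        return solve_base(problem)       # base case: solve locally
    mid = len(problem) // 2
    left = divide_and_conquer(problem[:mid], solve_base, threshold)
    right = divide_and_conquer(problem[mid:], solve_base, threshold)
    return left + right                  # combine step

if __name__ == "__main__":
    # In the architecture above, each recursive call could instead be handed to a
    # satellite server or a volunteer node; here everything runs in one process.
    print(divide_and_conquer(list(range(100)), solve_base=sum))  # prints 4950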
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
Meir, Arie; Rubinsky, Boris
2009-01-01
Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example: the design of a wireless, distributed-network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high-end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low-end ultrasound transducer designed for 2-D, through a mobile device and a wireless connection link between them. Producing high-end 3-D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high-quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and obtain a more accurate diagnosis, effectively saving lives. PMID:19936236
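A minimal sketch (Python) of the client-side relay this concept implies: raw 2-D frames and probe-position metadata are forwarded, unprocessed, from the mobile device to a central (cloud) service that performs the 3-D reconstruction. The endpoint URL, payload fields, and encoding below are hypothetical and are not the authors' interface.

import json
import urllib.request

def relay_frame(raw_frame: bytes, probe_pose: dict,
                endpoint: str = "https://example.org/reconstruct") -> None:
    # Package one unprocessed 2-D echo frame plus probe-position metadata and
    # forward it; all 3-D reconstruction happens at the central facility.
    payload = {
        "pose": probe_pose,              # hypothetical fields, e.g. {"x": ..., "y": ..., "tilt": ...}
        "frame_hex": raw_frame.hex(),    # raw transducer data, untouched on the device
    }
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)      # hypothetical cloud endpoint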
NASA Advanced Supercomputing Facility Expansion
NASA Technical Reports Server (NTRS)
Thigpen, William W.
2017-01-01
The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.
Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing
NASA Technical Reports Server (NTRS)
Some, Raphael; Doyle, Richard; Bergman, Larry; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael
2013-01-01
Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and mission. Onboard computing can be aptly viewed as a "technology multiplier" in that advances provide direct dramatic improvements in flight functions and capabilities across the NASA mission classes, and enable new flight capabilities and mission scenarios, increasing science and exploration return. Space-qualified computing technology, however, has not advanced significantly in well over ten years and the current state of the practice fails to meet the near- to mid-term needs of NASA missions. Recognizing this gap, the NASA Game Changing Development Program (GCDP), under the auspices of the NASA Space Technology Mission Directorate, commissioned a study on space-based computing needs, looking out 15-20 years. The study resulted in a recommendation to pursue high-performance spaceflight computing (HPSC) for next-generation missions, and a decision to partner with the Air Force Research Lab (AFRL) in this development.
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Radenski, Atanas; Follen, Gregory J. (Technical Monitor)
2001-01-01
The rapid growth of internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of new, internet-oriented software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this research project is to contribute to better understanding of the transition to internet-based high-performance computing and to develop solutions for some of the difficulties of this transition. More specifically, our goal is to design an architecture for generic divide and conquer internet-based computing, to develop a portable implementation of this architecture, to create an example library of high-performance divide-and-conquer computing agents that run on top of this architecture, and to evaluate the performance of these agents. We have been designing an architecture that incorporates a master task-pool server and utilizes satellite computational servers that operate on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. Our designed architecture is intended to be complementary to and accessible from computational grids such as Globus, Legion, and Condor. Grids provide remote access to existing high-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end internet nodes. Our project is focused on a generic divide-and-conquer paradigm and its applications that operate on a loose and ever-changing pool of lower-end internet nodes.
High End Computer Network Testbedding at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Gary, James Patrick
1998-01-01
The Earth & Space Data Computing (ESDC) Division at the Goddard Space Flight Center is involved in developing and demonstrating various high-end computer networking capabilities. The ESDC operates several high-end supercomputers, which are used to run computer simulations of climate systems, to support the Earth and Space Sciences (ESS) project, and to support the Grand Challenge (GC) Science effort aimed at understanding turbulent convection and dynamos in stars. GC research occurs at many sites throughout the country, and this research is enabled in part by multiple high-performance network interconnections. The application drivers for high-end computer networking use distributed supercomputing to support virtual reality applications such as TerraVision (a three-dimensional browser of remotely accessed data) and Cave Automatic Virtual Environments (CAVEs). Workstations can access and display data from multiple CAVEs with video servers, which allows group/project collaborations using a combination of video, data, voice, and shared whiteboarding. The ESDC is also developing and demonstrating a high degree of interoperability between satellite and terrestrial networks. To this end, the ESDC is conducting research on and evaluations of new computer networking protocols and related technologies that improve the interoperability of satellite and terrestrial networks. The ESDC is also involved in the Security Proof of Concept Keystone (SPOCK) program sponsored by the National Security Agency (NSA). The SPOCK activity provides a forum for government users and security technology providers to share information on security requirements, emerging technologies, and new product developments. In addition, the ESDC is involved in the Trans-Pacific Digital Library Experiment, which aims to demonstrate and evaluate the use of high-performance satellite communications and advanced data communications protocols to enable interactive digital library data access, at 155 Mbps, between the U.S. Library of Congress, the National Library of Japan, and other digital library sites. The ESDC's participation in this program is the Trans-Pacific access to GLOBE visualizations in real time. The ESDC is also participating in the Department of Defense's ATDNet with the Multiwavelength Optical Network (MONET), a fully switched wavelength-division networking testbed. This presentation is in viewgraph format.
ERIC Educational Resources Information Center
Izadpanah, Siros; Alavi, Mansooreh
2016-01-01
Recent developments in the field of computer technology have led to a renewed interest in the process of learning. In order to investigate EFL learners' perception of technology use, a mixed method design was used to explore students' attitude. Quantitative data was collected through questionnaires and qualitative data using open-ended questions.…
Hagland, Mark
2010-03-01
CIOs must ensure the creation of a technology foundation underlying the implementation of new applications, in order to guarantee continuous computing and other essential characteristics of IT service for end-users, going forward. Focusing on the needs of end-users will be essential to creating that foundation. End-user expectations are already outstripping technological capabilities, putting pressure on CIOs to carefully balance the offering of highly desired applications with the creation of a strong tech foundation to undergird those apps.
Managing End User Computing in the Federal Government.
ERIC Educational Resources Information Center
General Services Administration, Washington, DC.
This report presents an initial approach developed by the General Services Administration for the management of end user computing in federal government agencies. Defined as technology used directly by individuals in need of information products, end user computing represents a new field encompassing such technologies as word processing, personal…
High Productivity Computing Systems and Competitiveness Initiative
2007-07-01
planning committee for the annual, international Supercomputing Conference in 2004 and 2005. This is the leading HPC industry conference in the world. It...sector partnerships. Partnerships will form a key part of discussions at the 2nd High Performance Computing Users Conference, planned for July 13, 2005...other things an interagency roadmap for high-end computing core technologies and an accessibility improvement plan. Improving HPC Education and
2017-04-01
The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.
Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing
NASA Technical Reports Server (NTRS)
Doyle, Richard; Bergman, Larry; Some, Raphael; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael
2013-01-01
Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and the mission; it can be aptly viewed as a "technology multiplier" in that advances in onboard computing provide dramatic improvements in flight functions and capabilities across the NASA mission classes, and will enable new flight capabilities and mission scenarios, increasing science and exploration return per mission-dollar.
Radiology: "killer app" for next generation networks?
McNeill, Kevin M
2004-03-01
The core principles of digital radiology were well developed by the end of the 1980s. During the following decade, tremendous improvements in computer technology enabled realization of those principles at an affordable cost. In this decade work can focus on highly distributed radiology in the context of the integrated health care enterprise. Over the same period computer networking has evolved from a relatively obscure field used by a small number of researchers across low-speed serial links to a pervasive technology that affects nearly all facets of society. Development directions in network technology will ultimately provide end-to-end data paths with speeds that match or exceed the speeds of data paths within the local network and even within workstations. This article describes key developments in Next Generation Networks, potential obstacles, and scenarios in which digital radiology can become a "killer app" that helps to drive deployment of new network infrastructure.
The DYNES Instrument: A Description and Overview
NASA Astrophysics Data System (ADS)
Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi
2012-12-01
Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and lead to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high capacity network flows, as well as more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as a “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on University Campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
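The dynamic circuit idea can be pictured with a small admission-control sketch (Python). The class and method names below are invented for illustration and do not correspond to the actual DYNES or OSCARS interfaces; a real controller would also program the network elements for the reserved time window.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src_endpoint: str
    dst_endpoint: str
    bandwidth_mbps: int
    start: datetime
    duration: timedelta

def admit(request: CircuitRequest, spare_capacity_mbps: int = 10_000) -> bool:
    # Accept a guaranteed-bandwidth reservation only if it fits within the spare
    # capacity, leaving room for traditional best-effort end-user traffic.
    return request.bandwidth_mbps <= spare_capacity_mbps

if __name__ == "__main__":
    req = CircuitRequest("campus-A", "tier2-B", 2_000,
                         datetime(2012, 12, 1, 3, 0), timedelta(hours=4))
    print("admitted" if admit(req) else "rejected")   # prints "admitted"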
A Survey of Electronic Color Printer Technologies
NASA Astrophysics Data System (ADS)
Starkweather, Gary K.
1989-08-01
Electronic printing in black and white has now come of age. Both high and low speed laser printers now heavily populate the electronic printing marketplace. On the high end of the market, the Xerox 9700 printer is the market dominator while the Canon LBP-SX and CX engines dominate the low end of the market. Clearly, laser printers are the predominant monochrome electronic printing technology. Ink jet is now beginning to engage the low end printer market but still fails to attain laser printer image quality. As yet, ink jet is not a serious contender for the substantial low end laser printer marketplace served by Apple Computer's LaserWriter II and Hewlett-Packard's LaserJet printers. Laser printing generally dominates because of its cost/performance as well as the reliability of the cartridge serviced low end printers.
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
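The re-routing decision can be restated as a small sketch (Python). The attribute names and platform strings below are illustrative assumptions, not HTCondor ClassAd attributes or the site's actual job-router configuration.

from typing import Dict, List

def route_job(job: Dict[str, str], static_platforms: List[str]) -> str:
    # Jobs whose platform requirement is met by the static batch nodes stay local;
    # anything else is sent to a dynamically launched OpenStack VM worker node.
    wanted = job.get("platform", "default")
    if wanted == "default" or wanted in static_platforms:
        return "local-batch"
    return "openstack-vm"

if __name__ == "__main__":
    platforms = ["sl6-x86_64"]                               # hypothetical site platform
    print(route_job({"platform": "sl6-x86_64"}, platforms))  # local-batch
    print(route_job({"platform": "centos7-gpu"}, platforms)) # openstack-vm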
ERIC Educational Resources Information Center
Michelson, Avra; Rothenberg, Jeff
1993-01-01
The report considers the interaction of trends in information technology and trends in research practices and the policy implications for archives. The information is divided into 4 sections. The first section, an "Overview of Information Technology Trends," discusses end-user computing, which includes ubiquitous computing, end-user…
1981-03-12
agriculture, raw materials, energy sources, computers, lasers, space and aeronautics, high energy physics, and genetics. The four modernizations will be...accomplished and the strong socialist country that is born at the end of the century will be a keyhole for the promotion of science and technology...Process (FNP). Its purpose is to connect with the Kiautsu University computer (model 108) and then to connect a data terminal. This will make a
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need of high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation with a pre-configured example.
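The split/dispatch/merge pattern behind the reported near-N-fold speedup can be sketched as follows (Python). The worker function is a stand-in and does not invoke a real BLAST installation; this is not Squid's own code.

from concurrent.futures import ProcessPoolExecutor
from typing import Dict, List

def blast_partition(queries: List[str]) -> Dict[str, str]:
    # Stand-in for running BLAST over one chunk of the query set on one node.
    return {q: f"best-hit-for-{q}" for q in queries}

def run_distributed(queries: List[str], n_nodes: int = 4) -> Dict[str, str]:
    # Partition the queries round-robin across n_nodes, search each partition
    # independently, then merge the per-node results.
    chunks = [queries[i::n_nodes] for i in range(n_nodes)]
    merged: Dict[str, str] = {}
    with ProcessPoolExecutor(max_workers=n_nodes) as pool:
        for partial in pool.map(blast_partition, chunks):
            merged.update(partial)
    return merged

if __name__ == "__main__":
    print(len(run_distributed([f"seq{i}" for i in range(100)])))  # prints 100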
NASA HPCC Technology for Aerospace Analysis and Design
NASA Technical Reports Server (NTRS)
Schulbach, Catherine H.
1999-01-01
The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community-thus providing the US aerospace community with key tools necessary to reduce design cycle times and increase fidelity in order to improve safety, efficiency and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community for the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1 respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.
Use of cloud computing in biomedicine.
Sobeslav, Vladimir; Maresova, Petra; Krejcar, Ondrej; Franca, Tanos C C; Kuca, Kamil
2016-12-01
Nowadays, biomedicine is characterised by a growing need for processing of large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings, elasticity and scalability of using ICT. The aim of this paper is to explore the concept of cloud computing and the related use of this concept in the area of biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into the infrastructure, platform, and service layers, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, the high-end computing aspects of the cloud computing paradigm are analysed. Finally, the potential and current use of this technology in biomedical scientific research is discussed.
USDA-ARS?s Scientific Manuscript database
Remarkable advances in next-generation sequencing (NGS) technologies, bioinformatics algorithms, and computational technologies have significantly accelerated genomic research. However, complicated NGS data analysis still remains as a major bottleneck. RNA-seq, as one of the major area in the NGS fi...
Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping
2014-01-01
EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804
Developing Open Source Software To Advance High End Computing. Report to the President.
ERIC Educational Resources Information Center
National Coordination Office for Information Technology Research and Development, Arlington, VA.
This is part of a series of reports to the President and Congress developed by the President's Information Technology Advisory Committee (PITAC) on key contemporary issues in information technology. This report defines open source software, explains PITAC's interest in this model, describes the process used to investigate issues in open source…
A Look at the Impact of High-End Computing Technologies on NASA Missions
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart
2012-01-01
From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to designing safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.
ERIC Educational Resources Information Center
Karamete, Aysen
2015-01-01
This study aims to show the present conditions about the usage of cloud computing in the department of Computer Education and Instructional Technology (CEIT) amongst teacher trainees in School of Necatibey Education, Balikesir University, Turkey. In this study, a questionnaire with open-ended questions was used. 17 CEIT teacher trainees…
The grand challenge of managing the petascale facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R. J.; Mathematics and Computer Science
2007-02-28
This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.
Pen-based computers: Computers without keys
NASA Technical Reports Server (NTRS)
Conklin, Cheryl L.
1994-01-01
The National Space Transportation System (NSTS) comprises many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging due to the many separate processing locations where data is generated. This presents a significant problem when the timely collection, transfer, collation, and storage of data is required. This paper describes how new technology, referred to as Pen-Based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-Based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.
Fault Tolerance for VLSI Multicomputers
1985-08-01
that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers...technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for...order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error
A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm
Chen, Jui-Le; Yang, Chu-Sing
2013-01-01
The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate the docking prediction algorithm. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specially for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864
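An island-model migration step of the general kind described above can be sketched as follows (Python). The ring topology, population sizes, and scoring convention are assumptions for illustration, not the FCPLDPA implementation itself.

import random
from typing import List

def migrate(islands: List[List[float]], n_migrants: int = 2) -> None:
    # Copy each island's best-scoring candidates (lowest docking energy) to the
    # next island in a ring, overwriting that island's worst candidates.
    for i, island in enumerate(islands):
        island.sort()
        migrants = island[:n_migrants]
        target = islands[(i + 1) % len(islands)]
        target.sort()
        target[-n_migrants:] = migrants

if __name__ == "__main__":
    random.seed(0)
    islands = [[random.uniform(-12.0, 0.0) for _ in range(10)] for _ in range(4)]
    migrate(islands)
    print([round(min(island), 2) for island in islands])  # best energy per island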
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... programmability. (ii) Technology and source code. Technology and source code eligible for License Exception APP..., reexports and transfers (in-country) for nuclear, chemical, biological, or missile end-users and end-uses...
NASA Technical Reports Server (NTRS)
Schulbach, Catherine H. (Editor)
2000-01-01
The purpose of the CAS workshop is to bring together NASA's scientists and engineers and their counterparts in industry, other government agencies, and academia working in the Computational Aerosciences and related fields. This workshop is part of the technology transfer plan of the NASA High Performance Computing and Communications (HPCC) Program. Specific objectives of the CAS workshop are to: (1) communicate the goals and objectives of HPCC and CAS, (2) promote and disseminate CAS technology within the appropriate technical communities, including NASA, industry, academia, and other government labs, (3) help promote synergy among CAS and other HPCC scientists, and (4) permit feedback from peer researchers on issues facing High Performance Computing in general and the CAS project in particular. This year we had a number of exciting presentations in the traditional aeronautics, aerospace sciences, and high-end computing areas and in the less familiar (to many of us affiliated with CAS) earth science, space science, and revolutionary computing areas. Presentations of more than 40 high quality papers were organized into ten sessions and presented over the three-day workshop. The proceedings are organized here for easy access: by author, title and topic.
HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation
NASA Technical Reports Server (NTRS)
Sterling, Thomas; Bergman, Larry
2000-01-01
Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded (HTMT) parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared-memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven, coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.
MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank Mueller
2009-02-05
MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (Forum to Address Scalable Technology for Runtime and Operating Systems) and HECRTF (High-End Computing Revitalization Task Force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.
Mobile Computing for Aerospace Applications
NASA Technical Reports Server (NTRS)
Alena, Richard; Swietek, Gregory E. (Technical Monitor)
1994-01-01
The use of commercial computer technology in specific aerospace mission applications can reduce the cost and project cycle time required for the development of special-purpose computer systems. Additionally, the pace of technological innovation in the commercial market has made new computer capabilities available for demonstrations and flight tests. Three areas of research and development being explored by the Portable Computer Technology Project at NASA Ames Research Center are the application of commercial client/server network computing solutions to crew support and payload operations, the analysis of requirements for portable computing devices, and testing of wireless data communication links as extensions to the wired network. This paper will present computer architectural solutions to portable workstation design including the use of standard interfaces, advanced flat-panel displays and network configurations incorporating both wired and wireless transmission media. It will describe the design tradeoffs used in selecting high-performance processors and memories, interfaces for communication and peripheral control, and high resolution displays. The packaging issues for safe and reliable operation aboard spacecraft and aircraft are presented. The current status of wireless data links for portable computers is discussed from a system design perspective. An end-to-end data flow model for payload science operations from the experiment flight rack to the principal investigator is analyzed using capabilities provided by the new generation of computer products. A future flight experiment on-board the Russian MIR space station will be described in detail including system configuration and function, the characteristics of the spacecraft operating environment, the flight qualification measures needed for safety review, and the specifications of the computing devices to be used in the experiment. The software architecture chosen shall be presented. An analysis of the performance characteristics of wireless data links in the spacecraft environment will be discussed. Network performance and operation will be modeled and preliminary test results presented. A crew support application will be demonstrated in conjunction with the network metrics experiment.
ERIC Educational Resources Information Center
Manzano, Sancho J., Jr.
2012-01-01
Empirical studies have been conducted on what is known as end-user computing from as early as the 1980s to present-day IT employees. There have been many studies on using quantitative instruments by Cotterman and Kumar (1989) and Rockart and Flannery (1983). Qualitative studies on end-user computing classifications have been conducted by…
Migrating Educational Data and Services to Cloud Computing: Exploring Benefits and Challenges
ERIC Educational Resources Information Center
Lahiri, Minakshi; Moseley, James L.
2013-01-01
"Cloud computing" is currently the "buzzword" in the Information Technology field. Cloud computing facilitates convenient access to information and software resources as well as easy storage and sharing of files and data, without the end users being aware of the details of the computing technology behind the process. This…
Deep Space Network information system architecture study
NASA Technical Reports Server (NTRS)
Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.
1992-01-01
The purpose of this article is to describe an architecture for the DSN information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990's. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies--i.e., computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
A case study for cloud based high throughput analysis of NGS data using the globus genomics system
Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...
2015-01-01
Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.
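The scale-out pattern described here can be sketched generically (Python): several independent per-sample workflows are submitted concurrently so that an elastic back end can process them in parallel. The run_workflow helper below is a placeholder for illustration, not the Globus Genomics or Galaxy API.

from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List

def run_workflow(sample_id: str) -> Dict[str, str]:
    # Placeholder for: transfer the raw reads, launch the per-sample NGS workflow
    # on the elastic back end, and collect the outputs.
    return {"sample": sample_id, "status": "completed"}

def run_cohort(sample_ids: List[str], max_parallel: int = 8) -> List[Dict[str, str]]:
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_workflow, sample_ids))

if __name__ == "__main__":
    print(run_cohort([f"patient-{i:03d}" for i in range(20)]))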
Cloud@Home: A New Enhanced Computing Paradigm
NASA Astrophysics Data System (ADS)
Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco
Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet Computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)), and Green computing (a new frontier of ethical computing starting from the assumption that in the near future energy costs will be related to environmental pollution).
Effects of Regulation and Technology on End Uses of Nonfuel Mineral Commodities in the United States
Matos, Grecia R.
2007-01-01
The regulatory system and advancement of technologies have shaped the end-use patterns of nonfuel minerals used in the United States. These factors affected the quantities and types of materials used by society. Environmental concerns and awareness of possible negative effects on public health prompted numerous regulations that have dramatically altered the use of commodities like arsenic, asbestos, lead, and mercury. While the selected commodities represent only a small portion of overall U.S. materials use, they have the potential for harmful effects on human health or the environment, which other commodities, like construction aggregates, do not normally have. The advancement of technology allowed for new uses of mineral materials in products like high-performance computers, telecommunications equipment, plasma and liquid-crystal display televisions and computer monitors, mobile telephones, and electronic devices, which have become mainstream products. These technologies altered the end-use pattern of mineral commodities like gallium, germanium, indium, and strontium. Human ingenuity and people's demand for different and creative services increase the demand for new materials and industries while shifting the pattern of use of mineral commodities. The mineral commodities' end-use data are critical for the understanding of the magnitude and character of these flows, assessing their impact on the environment, and providing an early warning of potential problems in waste management of products containing these commodities. The knowledge of final disposition of the mineral commodity allows better decisions as to how regulation should be tailored.
Halder, S; Käthner, I; Kübler, A
2016-02-01
Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and, specifically, that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
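For context, information transfer rates of this kind are commonly computed with the standard Wolpaw formula; the brief sketch below evaluates it. The symbol count and selection rate used here are chosen purely for illustration and are not taken from the study.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate (bits/min) under the standard Wolpaw definition."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Example: a 92%-accurate speller choosing among an assumed 27 symbols
# at an assumed 2 selections per minute (both rates are illustrative).
print(round(wolpaw_itr(27, 0.92, 2.0), 2), "bits/min")
```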
NASA Technical Reports Server (NTRS)
Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
With programs such as the US High Performance Computing and Communications Program (HPCCP), the attention of scientists and engineers worldwide has been focused on the potential of very high performance scientific computing, namely systems that are hundreds or thousands of times more powerful than those typically available in desktop systems at any given point in time. Extending the frontiers of computing in this manner has resulted in remarkable advances, both in computing technology itself and also in the various scientific and engineering disciplines that utilize these systems. Within a month or two, a sustained rate of 1 Tflop/s (also written 1 teraflops, or 10^12 floating-point operations per second) is likely to be achieved by the 'ASCI Red' system at Sandia National Laboratory in New Mexico. With this objective in sight, it is reasonable to ask what lies ahead for high-end computing.
2009 High Performance Computing Modernization Program Users Group Conference
2009-06-17
Asymmetric Threats Future Peer GWoT / ungoverned areas Irregular Warfare Low-end Asymmetric 1-4-2-1 (State-to-State War) Disruptive technologies Superiority...2008 “As changes in this century’s threat environment create strategic challenges – irregular warfare, weapons of mass destruction, disruptive ... technologies – this request places greater emphasis on basic research, which in recent years has not kept pace with other parts of the budget.” • Personnel
ERIC Educational Resources Information Center
Marsh, Cecille
A two-phase study examined the skills required of competent end-users of computers in the workplace and assessed the computing awareness and technological environment of first-year students entering historically disadvantaged technikons in South Africa. First, a DACUM (Developing a Curriculum) panel of nine representatives of local business and…
ERIC Educational Resources Information Center
McConnell, Pamela Jean
1993-01-01
This third in a series of articles on EDIS (Electronic Document Imaging System) technology focuses on organizational issues. Highlights include computer platforms; management information systems; computer-based skills of staff; new technology and change; time factors; financial considerations; document conversion costs; the benefits of EDIS…
Magnesium Front End Research and Development: A Canada-China-USA Collaboration
NASA Astrophysics Data System (ADS)
Luo, Alan A.; Nyberg, Eric A.; Sadayappan, Kumar; Shi, Wenfang
The Magnesium Front End Research & Development (MFERD) project is an effort jointly sponsored by the United States Department of Energy, the United States Automotive Materials Partnership (USAMP), the Chinese Ministry of Science and Technology and Natural Resources Canada (NRCan) to demonstrate the technical and economic feasibility of a magnesium-intensive automotive front-end body structure, which offers improved fuel economy and performance benefits in a multi-material automotive structure. The project examines novel magnesium automotive body applications and processes, beyond conventional die castings, including wrought components (sheet or extrusions) and high-integrity body castings. This paper outlines the scope of work and organization for the collaborative (tri-country) task teams. The project has the goals of developing key enabling technologies and a knowledge base for increased magnesium automotive body applications. The MFERD project began in early 2007 by initiating R&D in the following areas: crashworthiness, NVH, fatigue and durability, corrosion and surface finishing, extrusion and forming, sheet and forming, high-integrity body casting, as well as joining and assembly. The MFERD project is also linked to the Integrated Computational Materials Engineering (ICME) project that will investigate the processing/structure/properties relations for various magnesium alloys and manufacturing processes utilizing advanced computer-aided engineering and modeling tools.
VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory
NASA Astrophysics Data System (ADS)
Škoda, Petr; Hadrava, Petr; Fuchs, Jan
2012-04-01
VO-KOREL is a web service exploiting the technology of the Virtual Observatory to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, conserving the privacy of every user by transfer encryption and access authentication, with features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it also explores the newly emerging technology of cloud computing. While the web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and, above all, watch the text and graphical results of a disentangling process, the main part of the back-end is a simple job queue submission system executing multiple instances of the FORTRAN code KOREL in parallel. This may be easily extended for GRID-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning advantages as well as bottlenecks of the design used.
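A minimal sketch of such a back-end job queue is shown below: it runs several independent instances of an external code in parallel, one per submitted job directory. The executable name "korel", the jobs/ directory layout, and the worker count are assumptions for illustration, not VO-KOREL's actual implementation.

```python
# Minimal sketch of a back-end job queue that runs several instances of an
# external disentangling code in parallel. The executable name "korel" and the
# per-job directory layout are placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def run_job(job_dir: Path) -> int:
    """Execute one KOREL-like job in its own directory; return the exit code."""
    log = job_dir / "run.log"
    with log.open("w") as out:
        proc = subprocess.run(["korel"], cwd=job_dir, stdout=out,
                              stderr=subprocess.STDOUT)
    return proc.returncode

if __name__ == "__main__":
    queue = sorted(Path("jobs").glob("job_*"))        # one directory per submission
    with ProcessPoolExecutor(max_workers=4) as pool:  # parallel instances
        for job, rc in zip(queue, pool.map(run_job, queue)):
            print(f"{job.name}: {'ok' if rc == 0 else f'failed ({rc})'}")
```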
Dallas County Community College District.
ERIC Educational Resources Information Center
Rudy, Julia
1989-01-01
Management of information technology at Dallas County Community College District is centralized. Information technology organization and planning, integrated data network, computer services, end user services, and educational technology are discussed. (MLW)
Deep Space Network information system architecture study
NASA Technical Reports Server (NTRS)
Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.
1992-01-01
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
End-to-end plasma bubble PIC simulations on GPUs
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava
2017-10-01
Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.
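To make the notion of a GPU-adapted computational kernel concrete, the sketch below shows a drastically simplified particle push in NumPy; in a production PIC code such as PSC, this loop, together with field gathering, current deposition, and the collision and heating operators, is what gets ported to GPU kernels. All values here are made up, and the field is reduced to a uniform stand-in.

```python
# Illustrative (CPU/NumPy) version of the kind of particle-push kernel a PIC
# code offloads to GPUs; charge/mass, time step, and the uniform field are
# made-up values, and the field-gathering step is omitted for brevity.
import numpy as np

def push_particles(x, v, E, q_over_m, dt):
    """Leapfrog push: accelerate by the local electric field, then advance positions."""
    v_new = v + q_over_m * E * dt          # velocity update
    x_new = x + v_new * dt                 # position update
    return x_new, v_new

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(0.0, 1.0, (n, 3))
v = rng.normal(0.0, 0.01, (n, 3))
E = np.array([0.0, 0.0, 1.0e-3])           # uniform field stand-in
for _ in range(10):
    x, v = push_particles(x, v, E, q_over_m=-1.0, dt=1.0e-2)
print("mean particle speed:", np.linalg.norm(v, axis=1).mean())
```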
Bringing education to your virtual doorstep
NASA Astrophysics Data System (ADS)
Kaurov, Vitaliy
2013-03-01
We currently witness significant migration of academic resources towards online CMS, social networking, and high-end computerized education. This happens for traditional academic programs as well as for outreach initiatives. The talk will go over a set of innovative integrated technologies, many of which are free. These were developed by Wolfram Research in order to facilitate and enhance the learning process in mathematical and physical sciences. Topics include: cloud computing with Mathematica Online; natural language programming; interactive educational resources and web publishing at the Wolfram Demonstrations Project; the computational knowledge engine Wolfram Alpha; Computable Document Format (CDF) and self-publishing with interactive e-books; course assistant apps for mobile platforms. We will also discuss outreach programs where such technologies are extensively used, such as the Wolfram Science Summer School and the Mathematica Summer Camp.
The science of visual analysis at extreme scale
NASA Astrophysics Data System (ADS)
Nowell, Lucy T.
2011-01-01
Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.
Computing, information, and communications: Technologies for the 21. Century
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-11-01
To meet the challenges of a radically new and technologically demanding century, the Federal Computing, Information, and Communications (CIC) programs are investing in long-term research and development (R and D) to advance computing, information, and communications in the United States. CIC R and D programs help Federal departments and agencies to fulfill their evolving missions, assure the long-term national security, better understand and manage the physical environment, improve health care, help improve the teaching of children, provide tools for lifelong training and distance learning to the workforce, and sustain critical US economic competitiveness. One of the nine committees of the National Science and Technology Council (NSTC), the Committee on Computing, Information, and Communications (CCIC)--through its CIC R and D Subcommittee--coordinates R and D programs conducted by twelve Federal departments and agencies in cooperation with US academia and industry. These R and D programs are organized into five Program Component Areas: (1) HECC--High End Computing and Computation; (2) LSN--Large Scale Networking, including the Next Generation Internet Initiative; (3) HCS--High Confidence Systems; (4) HuCS--Human Centered Systems; and (5) ETHR--Education, Training, and Human Resources. A brief synopsis of FY 1997 accomplishments and FY 1998 goals by PCA is presented. This report, which supplements the President's Fiscal Year 1998 Budget, describes the interagency CIC programs.
The Computer's Debt to Science.
ERIC Educational Resources Information Center
Branscomb, Lewis M.
1984-01-01
Discusses discoveries and applications of science that have enabled the computer industry to introduce new technology each year and produce 25 percent more for the customer at constant cost. Potential limits to progress, disc storage technology, programming and end-user interface, and designing for ease of use are considered. Glossary is included.…
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.
Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
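As a small illustration of the end product named in the title, the sketch below merges hypothetical per-sample count files into a gene expression matrix with pandas; the real OSG-GEM pipeline performs read trimming, mapping, and quantification as a Pegasus workflow on OSG resources before reaching this final assembly step.

```python
# Toy illustration of the final "gene expression matrix" assembly step: merging
# per-sample count files (gene_id<TAB>count) into one matrix. File names and
# directory layout are hypothetical, not those of OSG-GEM.
import pandas as pd
from pathlib import Path

def build_gem(count_dir: str) -> pd.DataFrame:
    columns = {}
    for path in sorted(Path(count_dir).glob("*.counts.tsv")):
        sample = path.stem.replace(".counts", "")
        counts = pd.read_csv(path, sep="\t", header=None,
                             names=["gene_id", sample], index_col="gene_id")
        columns[sample] = counts[sample]
    return pd.DataFrame(columns)            # rows: genes, columns: samples

if __name__ == "__main__":
    gem = build_gem("counts")
    gem.to_csv("gene_expression_matrix.tsv", sep="\t")
    print(gem.shape, "genes x samples")
```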
What about the Firewall? Creating Virtual Worlds in a Public Primary School Using Sim-on-a-Stick
ERIC Educational Resources Information Center
Jacka, Lisa; Booth, Kate
2012-01-01
Virtual worlds are highly immersive, engaging and popular computer mediated environments being explored by children and adults. Why then aren't more teachers using virtual worlds in the classroom with primary and secondary school students? Reasons often cited are the learning required to master the technology, low-end graphics cards, poor…
Advanced Collaborative Environments Supporting Systems Integration and Design
2003-03-01
concurrently view a virtual system or product model while maintaining natural, human communication. These virtual systems operate within a computer-generated... As a result, TARDEC researchers and system developers are using this advanced high-end visualization technology to develop future
Silva-Lopes, Victor W; Monteiro-Leal, Luiz H
2003-07-01
The development of new technology and the possibility of fast information delivery by either Internet or Intranet connections are changing education. Microanatomy education depends basically on the correct interpretation of microscopy images by students. Modern microscopes coupled to computers enable the presentation of these images in a digital form by creating image databases. However, access to this new technology is restricted entirely to those living in cities and towns with an Information Technology (IT) infrastructure. This study describes the creation of a free Internet histology database composed of high-quality images and also presents an inexpensive way to supply it to a greater number of students through Internet/Intranet connections. By using state-of-the-art scientific instruments, we developed a Web page (http://www2.uerj.br/~micron/atlas/atlasenglish/index.htm) that, in association with a multimedia microscopy laboratory, intends to help reduce the IT educational gap between developed and underdeveloped regions. Copyright 2003 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Johnson, M.; Label, K.; McCabe, J.; Powell, W.; Bolotin, G.; Kolawa, E.; Ng, T.; Hyde, D.
2007-01-01
Implementation of challenging Exploration Systems Missions Directorate objectives and strategies can be constrained by onboard computing capabilities and power efficiencies. The Radiation Hardened Electronics for Space Environments (RHESE) High Performance Processors for Space Environments project will address this challenge by significantly advancing the sustained throughput and processing efficiency of high-performance radiation-hardened processors, targeting delivery of products by the end of FY12.
SCALING AN URBAN EMERGENCY EVACUATION FRAMEWORK: CHALLENGES AND PRACTICES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthik, Rajasekar; Lu, Wei
2014-01-01
Critical infrastructure disruption, caused by severe weather events, natural disasters, terrorist attacks, etc., has significant impacts on urban transportation systems. We built a computational framework to simulate urban transportation systems under critical infrastructure disruption in order to aid real-time emergency evacuation. This framework will use large scale datasets to provide a scalable tool for emergency planning and management. Our framework, World-Wide Emergency Evacuation (WWEE), integrates population distribution and urban infrastructure networks to model travel demand in emergency situations at the global level. Also, a computational model of agent-based traffic simulation is used to provide an optimal evacuation plan for traffic operation purposes [1]. In addition, our framework provides a web-based high resolution visualization tool for emergency evacuation modelers and practitioners. We have successfully tested our framework with scenarios in both the United States (Alexandria, VA) and Europe (Berlin, Germany) [2]. However, there are still some major drawbacks for scaling this framework to handle big data workloads in real time. On the back-end, lack of proper infrastructure limits our ability to process large amounts of data, run the simulation efficiently and quickly, and provide fast retrieval and serving of data. On the front-end, the visualization performance of microscopic evacuation results is still not efficient enough due to high-volume data communication between server and client. We are addressing these drawbacks by using cloud computing and next-generation web technologies, namely Node.js, NoSQL, WebGL, Open Layers 3 and HTML5 technologies. We briefly describe each one and how we are using and leveraging these technologies to provide an efficient tool for emergency management organizations. Our early experimentation demonstrates that using the above technologies is a promising approach to building a scalable and high performance urban emergency evacuation framework that can improve traffic mobility and safety under critical infrastructure disruption in today's socially connected world.
High-power graphic computers for visual simulation: a real-time--rendering revolution
NASA Technical Reports Server (NTRS)
Kaiser, M. K.
1996-01-01
Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.
Influence of technology on magnetic tape storage device characteristics
NASA Technical Reports Server (NTRS)
Gniewek, John J.; Vogel, Stephen M.
1994-01-01
There are available today many data storage devices that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are 3480/3490 and QIC device types developed for the high end and low end segments of the data processing industry respectively, VHS, Beta, and 8 mm formats developed for consumer video applications, and D-1, D-2, D-3 formats developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. There has yet to be seen, however, any evidence of convergence of data storage device types. There are several reasons for this. The diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Benny N.
2000-01-01
There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for the near future. However, it is a well-known fact that there are major obstacles, i.e., the physical limits of feature-size reduction and the ever increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature-size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11 - 10^12 per square cm), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as the FFT and the Wavelet and Walsh-Hadamard Transforms.
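For reference, the sketch below gives a conventional (CPU, Python) fast Walsh-Hadamard transform, one of the transforms named above; it is included only to illustrate the butterfly structure that the proposed systolic QCA architectures parallelize, and is in no way a QCA implementation.

```python
# Plain fast Walsh-Hadamard transform: an O(n log n) butterfly over a
# power-of-two-length signal, shown as a conventional software illustration.
import numpy as np

def fwht(a: np.ndarray) -> np.ndarray:
    """Fast Walsh-Hadamard transform; input length must be a power of two."""
    a = a.astype(float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht(np.array([1, 0, 1, 0, 0, 1, 1, 0])))
```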
ERIC Educational Resources Information Center
Mobray, Deborah, Ed.
Papers on local area networks (LANs), modelling techniques, software improvement, capacity planning, software engineering, microcomputers and end user computing, cost accounting and chargeback, configuration and performance management, and benchmarking presented at this conference include: (1) "Theoretical Performance Analysis of Virtual…
Active and passive computed tomography mixed waste focus area final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, G P
1998-08-19
The Mixed Waste Focus Area (MWFA) Characterization Development Strategy delineates an approach to resolve technology deficiencies associated with the characterization of mixed wastes. The intent of this strategy is to ensure the availability of technologies to support the Department of Energy's (DOE) mixed waste low-level or transuranic (TRU) contaminated waste characterization management needs. To this end the MWFA has defined and coordinated characterization development programs to ensure that data and test results necessary to evaluate the utility of non-destructive assay technologies are available to meet site contact-handled waste management schedules. Requirements used as technology development project benchmarks are based in the National TRU Program Quality Assurance Program Plan. These requirements include the ability to determine total bias and total measurement uncertainty. These parameters must be completely evaluated for waste types to be processed through a given nondestructive waste assay system, constituting the foundation of activities undertaken in technology development projects. Once development and testing activities have been completed, Innovative Technology Summary Reports are generated to provide results and conclusions to support EM-30, -40, or -60 end user/customer technology selection. The Active and Passive Computed Tomography non-destructive assay system is one of the technologies selected for development by the MWFA. Lawrence Livermore National Laboratory (LLNL) is developing the Active and Passive Computed Tomography (A&PCT) nondestructive assay (NDA) technology to identify and accurately quantify all detectable radioisotopes in closed containers of waste. This technology will be applicable to all types of waste regardless of their classification: low level, transuranic, or mixed, which contains radioactivity and hazardous organic species. The scope of our technology is to develop a non-invasive waste-drum scanner that employs the principles of computed tomography and gamma-ray spectral analysis to identify and quantify all of the detectable radioisotopes. Once this and other applicable technologies are developed, waste drums can be non-destructively and accurately characterized to satisfy repository and regulatory guidelines prior to disposal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Citterio, M.; Camplani, A.; Cannon, M.
SRAM based Field Programmable Gate Arrays (FPGAs) have rarely been used in High Energy Physics (HEP) due to their sensitivity to radiation. The last generation of commercial FPGAs, based on 28 nm feature size and on Silicon On Insulator (SOI) technologies, is more tolerant to radiation, to the level that use of these devices in front-end electronics is now feasible. FPGAs provide re-programmability, high-speed computation and fast data transmission through the embedded serial transceivers. They could replace custom application specific integrated circuits in front-end electronics in locations with a moderate radiation field. Finally, the use of an FPGA in HEP experiments is limited only by our ability to mitigate single event effects induced by the high energy hadrons present in the radiation field.
NASA Technical Reports Server (NTRS)
1997-01-01
In 1990, Avtec Systems, Inc. developed its first telemetry boards for Goddard Space Flight Center. Avtec products now include PC/AT, PCI and VME-based high speed I/O boards and turn-key systems. The most recent and most successful technology transfer from NASA to Avtec is the Programmable Telemetry Processor (PTP), a personal computer-based, multi-channel telemetry front-end processing system originally developed to support the NASA communication (NASCOM) network. The PTP performs data acquisition, real-time network transfer, and store and forward operations. There are over 100 PTP systems located in NASA facilities and throughout the world.
Understanding and enhancing user acceptance of computer technology
NASA Technical Reports Server (NTRS)
Rouse, William B.; Morris, Nancy M.
1986-01-01
Technology-driven efforts to implement computer technology often encounter problems due to lack of acceptance or begrudging acceptance of the personnel involved. It is argued that individuals' acceptance of automation, in terms of either computerization or computer aiding, is heavily influenced by their perceptions of the impact of the automation on their discretion in performing their jobs. It is suggested that desired levels of discretion reflect needs to feel in control and achieve self-satisfaction in task performance, as well as perceptions of inadequacies of computer technology. Discussion of these factors leads to a structured set of considerations for performing front-end analysis, deciding what to automate, and implementing the resulting changes.
Fully-Coupled Thermo-Electrical Modeling and Simulation of Transition Metal Oxide Memristors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamaluy, Denis; Gao, Xujiao; Tierney, Brian David
2016-11-01
Transition metal oxide (TMO) memristors have recently attracted special attention from the semiconductor industry and academia. Memristors are one of the strongest candidates to replace flash memory, and possibly DRAM and SRAM in the near future. Moreover, memristors have a high potential to enable beyond-CMOS technology advances in novel architectures for high performance computing (HPC). The utility of memristors has been demonstrated in reprogrammable logic (cross-bar switches), brain-inspired computing and in non-CMOS complementary logic. Indeed, the potential use of memristors as logic devices is especially important considering the inevitable end of CMOS technology scaling that is anticipated by 2025. In order to aid the on-going Sandia memristor fabrication effort with a memristor design tool and establish a clear physical picture of resistance switching in TMO memristors, we have created and validated with experimental data a simulation tool we name the Memristor Charge Transport (MCT) Simulator.
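For orientation only, the sketch below integrates the well-known linear-drift (HP-style) memristor model; it is a textbook toy with illustrative parameter values, not the fully coupled thermo-electrical physics solved by the MCT Simulator.

```python
# Toy linear-drift memristor model: the doped-region width w evolves with the
# current, and the memristance interpolates between R_ON and R_OFF. All
# parameters are illustrative.
import numpy as np

R_ON, R_OFF = 100.0, 16e3      # resistance bounds (ohms)
D, MU_V = 10e-9, 1e-14         # oxide thickness (m), dopant mobility (m^2/(V s))

def simulate(v_of_t, dt=1e-6, w0=0.1):
    w = w0 * D                 # width of the doped (low-resistance) region
    currents = []
    for v in v_of_t:
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)            # memristance
        i = v / m
        w = np.clip(w + MU_V * (R_ON / D) * i * dt, 0.0, D)   # linear drift of w
        currents.append(i)
    return np.array(currents)

t = np.linspace(0.0, 2e-3, 2000)
i = simulate(1.2 * np.sin(2 * np.pi * 1e3 * t))
print("peak current (mA):", 1e3 * i.max())
```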
Integrated modeling of advanced optical systems
NASA Astrophysics Data System (ADS)
Briggs, Hugh C.; Needels, Laura; Levine, B. Martin
1993-02-01
This poster session paper describes an integrated modeling and analysis capability being developed at JPL under funding provided by the JPL Director's Discretionary Fund and the JPL Control/Structure Interaction Program (CSI). The posters briefly summarize the program capabilities and illustrate them with an example problem. The computer programs developed under this effort will provide an unprecedented capability for integrated modeling and design of high performance optical spacecraft. The engineering disciplines supported include structural dynamics, controls, optics and thermodynamics. Such tools are needed in order to evaluate the end-to-end system performance of spacecraft such as OSI, POINTS, and SMMM. This paper illustrates the proof-of-concept tools that have been developed to establish the technology requirements and demonstrate the new features of integrated modeling and design. The current program also includes implementation of a prototype tool based upon the CAESY environment being developed under the NASA Guidance and Control Research and Technology Computational Controls Program. This prototype will be available late in FY-92. The development plan proposes a major software production effort to fabricate, deliver, support and maintain a national-class tool from FY-93 through FY-95.
Integrating Information Technologies Into Large Organizations
NASA Technical Reports Server (NTRS)
Gottlich, Gretchen; Meyer, John M.; Nelson, Michael L.; Bianco, David J.
1997-01-01
NASA Langley Research Center's product is aerospace research information. To this end, Langley uses information technology tools in three distinct ways. First, information technology tools are used in the production of information via computation, analysis, data collection and reduction. Second, information technology tools assist in streamlining business processes, particularly those that are primarily communication based. By applying these information tools to administrative activities, Langley spends fewer resources on managing itself and can allocate more resources for research. Third, Langley uses information technology tools to disseminate its aerospace research information, resulting in faster turn around time from the laboratory to the end-customer.
EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.
Basic principles of cone beam computed tomography.
Abramovitch, Kenneth; Rice, Dwight D
2014-07-01
At the end of the millennium, cone-beam computed tomography (CBCT) heralded a new dental technology for the next century. Owing to the dramatic and positive impact of CBCT on implant dentistry and orthognathic/orthodontic patient care, additional applications for this technology soon evolved. New software programs were developed to improve the applicability of, and access to, CBCT for dental patients. Improved, rapid, and cost-effective computer technology, combined with the ability of software engineers to develop multiple dental imaging applications for CBCT with broad diagnostic capability, have played a large part in the rapid incorporation of CBCT technology into dentistry. Copyright © 2014 Elsevier Inc. All rights reserved.
VCSEL-based optical transceiver module for high-speed short-reach interconnect
NASA Astrophysics Data System (ADS)
Yagisawa, Takatoshi; Oku, Hideki; Mori, Tatsuhiro; Tsudome, Rie; Tanaka, Kazuhiro; Daikuhara, Osamu; Komiyama, Takeshi; Ide, Satoshi
2017-02-01
Interconnects have become increasingly important in high-performance computing systems and high-end servers, alongside improvements in computing capability. Recently, active optical cables (AOCs) have started being used for this purpose instead of conventionally used copper cables. An AOC extends the transmission distance of high-speed signals dramatically thanks to its broadband characteristics; however, it tends to increase cost. In this paper, we report our quad small form-factor pluggable (QSFP) AOC developed with cost-effective optical-module technologies. These are a unique structure using a generally used flexible printed circuit (FPC) in combination with an optical waveguide that enables low-cost, high-precision assembly with passive alignment; a lens-integrated ferrule that improves productivity by eliminating the polishing process needed for physical contact of a standard PMT connector for the optical waveguide; and an overdrive technology that enables 100 Gb/s (25 Gb/s × 4-channel) operation with a low-cost 14 Gb/s vertical-cavity surface-emitting laser (VCSEL) array. The QSFP AOC demonstrated clear eye opening and error-free operation at 100 Gb/s with a high yield rate, even though the 14 Gb/s VCSEL was used, thanks to the low coupling loss resulting from the high-precision alignment of optical devices and the overdrive technology.
An overview of recent end-to-end wireless medical video telemedicine systems using 3G.
Panayides, A; Pattichis, M S; Pattichis, C S; Schizas, C N; Spanias, A; Kyriacou, E
2010-01-01
Advances in video compression, network technologies, and computer technologies have contributed to the rapid growth of mobile health (m-health) systems and services. Wide deployment of such systems and services is expected in the near future, and it is foreseen that they will soon be incorporated in daily clinical practice. This study focuses on describing the basic components of an end-to-end wireless medical video telemedicine system, providing a brief overview of the recent advances in the field, while it also highlights future trends in the design of telemedicine systems that are diagnostically driven.
Design and deployment of an elastic network test-bed in IHEP data center based on SDN
NASA Astrophysics Data System (ADS)
Zeng, Shan; Qi, Fazhi; Chen, Gang
2017-10-01
High energy physics experiments produce huge amounts of raw data, and because the network resources are shared, there is no guarantee of the bandwidth available to each experiment, which may cause link congestion problems. On the other hand, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which ensures the flexibility of the computing and storage resources, and more and more computing applications have been deployed on virtual machines established by OpenStack. However, under the traditional network architecture, network capacity cannot be provisioned elastically, which becomes the bottleneck restricting the flexible application of cloud computing. In order to solve the above problems, we propose an elastic cloud data center network architecture based on SDN, and we also design a high performance controller cluster based on OpenDaylight. Finally, we present our current test results.
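As a hedged illustration of what elastic, programmatic network provisioning through an SDN controller can look like, the sketch below pushes a single flow rule over RESTCONF. The controller address, credentials, node and flow identifiers, and the flow body follow commonly documented OpenDaylight conventions but are placeholders; the interface exposed by the IHEP controller cluster is not specified in the abstract and may differ.

```python
# Hypothetical sketch: installing one forwarding rule on an OpenFlow switch
# through an OpenDaylight-style RESTCONF endpoint. All identifiers are placeholders.
import json
import requests

CONTROLLER = "http://controller.example.org:8181"         # placeholder address
NODE, TABLE, FLOW_ID = "openflow:1", 0, "evt-transfer-1"  # placeholder identifiers

flow = {
    "flow": [{
        "id": FLOW_ID,
        "table_id": TABLE,
        "priority": 200,
        "match": {"ipv4-destination": "192.168.10.0/24"},
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0, "output-action": {"output-node-connector": "2"}}]}
        }]}
    }]
}

url = (f"{CONTROLLER}/restconf/config/opendaylight-inventory:nodes/node/{NODE}"
       f"/flow-node-inventory:table/{TABLE}/flow/{FLOW_ID}")
resp = requests.put(url, data=json.dumps(flow),
                    auth=("admin", "admin"),
                    headers={"Content-Type": "application/json"})
print(resp.status_code)
```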
Citterio, M.; Camplani, A.; Cannon, M.; ...
2015-11-19
SRAM based Field Programmable Gate Arrays (FPGAs) have rarely been used in High Energy Physics (HEP) due to their sensitivity to radiation. The last generation of commercial FPGAs, based on 28 nm feature size and on Silicon On Insulator (SOI) technologies, is more tolerant to radiation, to the level that use of these devices in front-end electronics is now feasible. FPGAs provide re-programmability, high-speed computation and fast data transmission through the embedded serial transceivers. They could replace custom application specific integrated circuits in front-end electronics in locations with a moderate radiation field. Finally, the use of an FPGA in HEP experiments is limited only by our ability to mitigate single event effects induced by the high energy hadrons present in the radiation field.
Systems Librarian and Automation Review.
ERIC Educational Resources Information Center
Schuyler, Michael
1992-01-01
Discusses software sharing on computer networks and the need for proper bandwidth; and describes the technology behind FidoNet, a computer network made up of electronic bulletin boards. Network features highlighted include front-end mailers, Zone Mail Hour, Nodelist, NetMail, EchoMail, computer conferences, tosser and scanner programs, and host…
High-performance computing-based exploration of flow control with micro devices.
Fujii, Kozo
2014-08-13
The dielectric barrier discharge (DBD) plasma actuator that controls flow separation is one of the promising technologies to realize energy savings and noise reduction of fluid dynamic systems. However, the mechanism for controlling flow separation is not clearly defined, and this lack of knowledge prevents practical use of this technology. Therefore, large-scale computations for the study of the DBD plasma actuator have been conducted using the Japanese Petaflops supercomputer 'K' for three different Reynolds numbers. A number of new findings on the control of flow separation by the DBD plasma actuator have been obtained from the simulations, and some of them are presented in this study. Knowledge of suitable device parameters is also obtained. The DBD plasma actuator is clearly shown to be very effective for controlling flow separation at a Reynolds number of around 10^5, and lift-to-drag ratios several times larger can be achieved at higher angles of attack after stall. For higher Reynolds numbers, separated flow is partially controlled. Flow analysis shows key features towards better control. DBD plasma actuators are a promising technology, which could reduce fuel consumption and contribute to a green environment by achieving high aerodynamic performance. The knowledge described above can be obtained only with high-end computers such as the supercomputer 'K'. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Hinke, Thomas H.
2004-01-01
Grid technology consists of middleware that permits distributed computations, data and sensors to be seamlessly integrated into a secure, single-sign-on processing environment. In this environment, a user has to identify and authenticate himself once to the grid middleware, and then can utilize any of the distributed resources to which he has been granted access. Grid technology allows resources that exist in enterprises that are under different administrative control to be securely integrated into a single processing environment. The grid community has adopted commercial web services technology as a means for implementing persistent, re-usable grid services that sit on top of the basic distributed processing environment that grids provide. These grid services can then form building blocks for even more complex grid services. Each grid service is characterized using the Web Service Description Language, which provides a description of the interface and how other applications can access it. The emerging Semantic grid work seeks to associate sufficient semantic information with each grid service such that applications will be able to automatically select, compose and, if necessary, substitute available equivalent services in order to assemble collections of services that are most appropriate for a particular application. Grid technology has been used to provide limited support to various Earth and space science applications. Looking to the future, this emerging grid service technology can provide a cyberinfrastructure for both the Earth and space science communities. Groups within these communities could transform those applications that have community-wide applicability into persistent grid services that are made widely available to their respective communities. In concert with grid-enabled data archives, users could easily create complex workflows that extract desired data from one or more archives and process it through an appropriate set of widely distributed grid services discovered using semantic grid technology. As required, high-end computational resources could be drawn from available grid resource pools. Using grid technology, this confluence of data, services and computational resources could easily be harnessed to transform data from many different sources into a desired product that is delivered to a user's workstation or to a web portal through which it could be accessed by its intended audience.
Cheng, Ji-Hong; Liu, Wen-Chun; Chang, Ting-Tsung; Hsieh, Sun-Yuan; Tseng, Vincent S
2017-10-01
Many studies have suggested that deletions in the Hepatitis B Virus (HBV) genome are associated with the development of progressive liver diseases, even ultimately resulting in hepatocellular carcinoma (HCC). Among the methods for detecting deletions from next-generation sequencing (NGS) data, few consider the characteristics of viruses, such as high evolution rates and high divergence among different HBV genomes. Sequencing highly divergent HBV genome sequences with NGS technology outputs millions of reads, so detecting the exact breakpoints of deletions from these big and complex data incurs a very high computational cost. We propose a novel analytical method named VirDelect (Virus Deletion Detect), which uses split-read alignment to detect exact breakpoints and a diversity variable to account for high divergence in single-end read data, such that the computational cost can be reduced without losing accuracy. We use four simulated read datasets and two real paired-end read datasets of HBV genome sequences to verify the accuracy of VirDelect by score functions. The experimental results show that VirDelect outperforms the state-of-the-art method Pindel in terms of accuracy score for all simulated datasets, and VirDelect had only two base errors even in the real datasets. VirDelect is also shown to deliver high accuracy in analyzing single-end read data as well as paired-end data. VirDelect can serve as an effective and efficient bioinformatics tool for physiologists, with high accuracy and efficient performance, and is applicable to further analysis of genomes with characteristics similar to HBV in terms of genome length and high divergence. The software program of VirDelect can be downloaded at https://sourceforge.net/projects/virdelect/. Copyright © 2017. Published by Elsevier Inc.
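To illustrate the split-read idea in general terms, the sketch below scans a BAM file with pysam and reports gaps between a read's primary alignment and its supplementary (SA-tag) alignment as candidate deletions. This is a simplification for illustration only, not VirDelect's algorithm; the input file name and minimum gap size are assumptions.

```python
# Simplified split-read evidence for deletions: when a read's primary alignment
# ends well before its supplementary (SA-tag) alignment starts on the same
# reference, the gap is a candidate deletion. "aligned_reads.bam" is a placeholder.
import pysam

MIN_GAP = 50   # assumed minimum deletion size worth reporting (bp)

with pysam.AlignmentFile("aligned_reads.bam", "rb") as bam:
    for read in bam:
        if read.is_unmapped or read.is_supplementary or not read.has_tag("SA"):
            continue
        # SA tag format: "rname,pos,strand,CIGAR,mapQ,NM;" (possibly several entries)
        for entry in read.get_tag("SA").rstrip(";").split(";"):
            rname, pos, strand, _cigar, _mapq, _nm = entry.split(",")
            if rname != read.reference_name:
                continue
            gap = int(pos) - 1 - read.reference_end   # 0-based end vs 1-based SA pos
            if gap >= MIN_GAP:
                print(f"{rname}: candidate deletion "
                      f"{read.reference_end}-{int(pos) - 1} ({gap} bp)")
```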
ERIC Educational Resources Information Center
Katz, Yaacov J.
2002-01-01
Describes the development of the use of information and communication technology (ICT) in the Israeli educational system. Discusses a behaviorist approach to computer assisted instruction; open-ended courseware; constructivist approaches to multimedia, including simulations, modeling, and virtual reality; technology-based distance learning; and…
A chiral-based magnetic memory device without a permanent magnet
Dor, Oren Ben; Yochelis, Shira; Mathew, Shinto P.; Naaman, Ron; Paltiel, Yossi
2013-01-01
Several technologies are currently in use for computer memory devices. However, there is a need for a universal memory device that has high density, high speed and low power requirements. To this end, various types of magnetic-based technologies with a permanent magnet have been proposed. Recent charge-transfer studies indicate that chiral molecules act as an efficient spin filter. Here we utilize this effect to achieve a proof of concept for a new type of chiral-based magnetic-based Si-compatible universal memory device without a permanent magnet. More specifically, we use spin-selective charge transfer through a self-assembled monolayer of polyalanine to magnetize a Ni layer. This magnitude of magnetization corresponds to applying an external magnetic field of 0.4 T to the Ni layer. The readout is achieved using low currents. The presented technology has the potential to overcome the limitations of other magnetic-based memory technologies to allow fabricating inexpensive, high-density universal memory-on-chip devices. PMID:23922081
A chiral-based magnetic memory device without a permanent magnet.
Ben Dor, Oren; Yochelis, Shira; Mathew, Shinto P; Naaman, Ron; Paltiel, Yossi
2013-01-01
Several technologies are currently in use for computer memory devices. However, there is a need for a universal memory device that has high density, high speed and low power requirements. To this end, various types of magnetic-based technologies with a permanent magnet have been proposed. Recent charge-transfer studies indicate that chiral molecules act as an efficient spin filter. Here we utilize this effect to achieve a proof of concept for a new type of chiral-based magnetic-based Si-compatible universal memory device without a permanent magnet. More specifically, we use spin-selective charge transfer through a self-assembled monolayer of polyalanine to magnetize a Ni layer. This magnitude of magnetization corresponds to applying an external magnetic field of 0.4 T to the Ni layer. The readout is achieved using low currents. The presented technology has the potential to overcome the limitations of other magnetic-based memory technologies to allow fabricating inexpensive, high-density universal memory-on-chip devices.
P2P Technology for High-Performance Computing: An Overview
NASA Technical Reports Server (NTRS)
Follen, Gregory J. (Technical Monitor); Berry, Jason
2003-01-01
The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as long as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility to use the whole Internet, rather than local clusters, as a massively parallel yet almost freely available P2P supercomputer. As a part of a larger project on P2P high-performance computing, it has been my goal to compile an overview of the P2P paradigm. I have studied various P2P platforms and I have compiled systematic brief descriptions of their most important characteristics. I have also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I have also compiled relevant literature and web references. I have prepared a draft technical report and I have summarized my findings in a poster paper.
High-End Computing for Incompressible Flows
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin
2001-01-01
The objective of the First MIT Conference on Computational Fluid and Solid Mechanics (June 12-14, 2001) is to bring together industry and academia (and government) to nurture the next generation in computational mechanics. The objective of the current talk, 'High-End Computing for Incompressible Flows', is to discuss some of the current issues in large scale computing for mission-oriented tasks.
Information technologies for astrophysics circa 2001
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage and retrieval off the shelf; and the linear mode of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.
Resource Analysis of Cognitive Process Flow Used to Achieve Autonomy
2016-03-01
to be used as a decision-making aid to guide system designers and program managers not necessarily familiar with cognitive processing, or resource...implementing end-to-end cognitive processing flows multiplies and the impact of these design decisions on efficiency and effectiveness increases [1]. The...end-to-end cognitive systems and alternative computing technologies, then system design and acquisition personnel could make systematic analyses and
JINR cloud infrastructure evolution
NASA Astrophysics Data System (ADS)
Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.
2016-09-01
To fulfil JINR commitments in different national and international projects related to the use of modern information technologies, such as cloud and grid computing, and to provide a modern tool for JINR users for their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to fit local needs: a web form in the cloud web-interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover users' increasing needs in capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.
Fault tolerant computing: A preamble for assuring viability of large computer systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1977-01-01
The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.
(Computer) Vision without Sight
Manduchi, Roberto; Coughlan, James
2012-01-01
Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563
Challenges of Future High-End Computing
NASA Technical Reports Server (NTRS)
Bailey, David; Kutler, Paul (Technical Monitor)
1998-01-01
The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.
Iniaghe, Paschal O; Adie, Gilbert U
2015-11-01
Cathode ray tubes are image display units found in computer monitors and televisions. In recent years, cathode ray tubes have been generated as waste owing to the introduction of newer and more advanced image display technologies, such as liquid crystal displays and high definition televisions, among others. Generation and subsequent disposal of end-of-life cathode ray tubes presents a challenge owing to increasing volumes and the high lead content embedded in the funnel and neck sections of the glass. Disposal in landfills and open dumping are environmentally unsound practices, given the potential for large-scale contamination of environmental media by toxic metals leaching from the glass. Mitigating such environmental contamination will require sound management strategies that are environmentally friendly and economically feasible. This review covers existing and emerging management practices for end-of-life cathode ray tubes. An in-depth analysis of available technologies (glass smelting, detoxification of cathode ray tube glass, lead extraction from cathode ray tube glass) revealed that most of the techniques are environmentally friendly, but are largely confined to laboratory scale, are often limited by high set-up costs, or generate secondary pollutants, while closed-loop recycling is antiquated. However, recycling in cementitious systems (cement mortar and concrete) gives an added advantage in terms of the quantity of cathode ray tube glass that can be recycled at a given time, with minimal environmental and economic implications. With significant quantities of waste cathode ray tube glass being generated globally, cementitious systems could be an economically and environmentally acceptable management practice for cathode ray tube glass where other technologies may not be applicable. © The Author(s) 2015.
Keeping Disability in Mind: A Case Study in Implantable Brain-Computer Interface Research.
Sullivan, Laura Specker; Klein, Eran; Brown, Tim; Sample, Matthew; Pham, Michelle; Tubig, Paul; Folland, Raney; Truitt, Anjali; Goering, Sara
2018-04-01
Brain-Computer Interface (BCI) research is an interdisciplinary area of study within Neural Engineering. Recent interest in end-user perspectives has led to an intersection with user-centered design (UCD). The goal of user-centered design is to reduce the translational gap between researchers and potential end users. However, while qualitative studies have been conducted with end users of BCI technology, little is known about individual BCI researchers' experience with and attitudes towards UCD. Given the scientific, financial, and ethical imperatives of UCD, we sought to gain a better understanding of practical and principled considerations for researchers who engage with end users. We conducted a qualitative interview case study with neural engineering researchers at a center dedicated to the creation of BCIs. Our analysis generated five themes common across interviews. The thematic analysis shows that participants identify multiple beneficiaries of their work, including other researchers, clinicians working with devices, device end users, and families and caregivers of device users. Participants value experience with device end users, and personal experience is the most meaningful type of interaction. They welcome (or even encourage) end-user input, but are skeptical of limited focus groups and case studies. They also recognize a tension between creating sophisticated devices and developing technology that will meet user needs. Finally, interviewees espouse functional, assistive goals for their technology, but describe uncertainty in what degree of function is "good enough" for individual end users. Based on these results, we offer preliminary recommendations for conducting future UCD studies in BCI and neural engineering.
Legal issues in clouds: towards a risk inventory.
Djemame, Karim; Barnitzke, Benno; Corrales, Marcelo; Kiran, Mariam; Jiang, Ming; Armstrong, Django; Forgó, Nikolaus; Nwankwo, Iheanyi
2013-01-28
Cloud computing technologies have reached a high level of development, yet a number of obstacles still exist that must be overcome before widespread commercial adoption can become a reality. In a cloud environment, end users requesting services and cloud providers negotiate service-level agreements (SLAs) that provide explicit statements of all expectations and obligations of the participants. If cloud computing is to experience widespread commercial adoption, then incorporating risk assessment techniques is essential during SLA negotiation and service operation. This article focuses on the legal issues surrounding risk assessment in cloud computing. Specifically, it analyses risk regarding data protection and security, and presents the requirements of an inherent risk inventory. The usefulness of such a risk inventory is described in the context of the OPTIMIS project.
Review on Microstructure Analysis of Metals and Alloys Using Image Analysis Techniques
NASA Astrophysics Data System (ADS)
Rekha, Suganthini; Bupesh Raja, V. K.
2017-05-01
Metals and alloys find vast application in engineering and domestic sectors. The mechanical properties of metals and alloys are influenced by their microstructure, so microstructural investigation is very critical. Traditionally the microstructure is studied using an optical microscope after suitable metallurgical preparation. Over the past few decades, computers have been applied to the capture and analysis of optical micrographs. The advent of computer software such as digital image processing and computer vision technologies is a boon to the analysis of microstructure. In this paper, a literature study of the various developments in microstructural analysis is presented. The conventional optical microscope is complemented by the use of the Scanning Electron Microscope (SEM) and other high-end equipment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthik, Rajasekar
2014-01-01
In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exist a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard practices that have been adopted in this architecture.
NASA Technical Reports Server (NTRS)
2004-01-01
Since its founding in 1992, Global Science & Technology, Inc. (GST), of Greenbelt, Maryland, has been developing technologies and providing services in support of NASA scientific research. GST specialties include scientific analysis, science data and information systems, data visualization, communications, networking and Web technologies, computer science, and software system engineering. As a longtime contractor to Goddard Space Flight Center s Earth Science Directorate, GST scientific, engineering, and information technology staff have extensive qualifications with the synthesis of satellite, in situ, and Earth science data for weather- and climate-related projects. GST s experience in this arena is end-to-end, from building satellite ground receiving systems and science data systems, to product generation and research and analysis.
Health Monitoring System Technology Assessments: Cost Benefits Analysis
NASA Technical Reports Server (NTRS)
Kent, Renee M.; Murphy, Dennis A.
2000-01-01
The subject of sensor-based structural health monitoring is very diverse and encompasses a wide range of activities including initiatives and innovations involving the development of advanced sensor, signal processing, data analysis, and actuation and control technologies. In addition, it embraces the consideration of the availability of low-cost, high-quality contributing technologies, computational utilities, and hardware and software resources that enable the operational realization of robust health monitoring technologies. This report presents a detailed analysis of the cost benefit and other logistics and operational considerations associated with the implementation and utilization of sensor-based technologies for use in aerospace structure health monitoring. The scope of this volume is to assess the economic impact, from an end-user perspective, of implementing health monitoring technologies on three structures. It specifically focuses on evaluating the impact on maintaining and supporting these structures with and without health monitoring capability.
High Tech: A Place in Our Lives and in Our Schools.
ERIC Educational Resources Information Center
Roach, John V.
1986-01-01
Discusses various aspects of high technology: computers in cars, computer-assisted design and manufacturing, computers in telephones, video recorders, laser technology, home computers, job training, computer education, and the challenge to the technology teacher. (CT)
2013-03-01
within the Global Information Grid (GIG) (AFDD6-0, 2011). JP 1-02 describes the GIG: The GIG is the globally interconnected, end-to-end set of...to warfighters, policy makers, and support personnel. The GIG includes all owned and leased communications and computing systems and services...software (including applications), data, security services, and other associated services necessary to achieve information superiority. The GIG
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Barclay, Rebecca O.; Bishop, Ann P.; Kennedy, John M.
1992-01-01
Federal attempts to stimulate technological innovation have been unsuccessful because of the application of an inappropriate policy framework that lacks conceptual and empirical knowledge of the process of technological innovation and fails to acknowledge the relationship between knowledge production, transfer, and use as equally important components of the process of knowledge diffusion. It is argued that the potential contributions of high-speed computing and networking systems will be diminished unless empirically derived knowledge about the information-seeking behavior of the members of the social system is incorporated into a new policy framework. Findings from the NASA/DoD Aerospace Knowledge Diffusion Research Project are presented in support of this assertion.
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Barclay, Rebecca O.; Bishop, Ann P.; Kennedy, John M.
1992-01-01
Federal attempts to stimulate technological innovation have been unsuccessful because of the application of an inappropriate policy framework that lacks conceptual and empirical knowledge of the process of technological innovation and fails to acknowledge the relationship between knowledge production, transfer, and use as equally important components of the process of knowledge diffusion. This article argues that the potential contributions of high-speed computing and networking systems will be diminished unless empirically derived knowledge about the information-seeking behavior of members of the social system is incorporated into a new policy framework. Findings from the NASA/DoD Aerospace Knowledge Diffusion Research Project are presented in support of this assertion.
The use of PC based VR in clinical medicine: the VREPAR projects.
Riva, G; Bacchetta, M; Baruffi, M; Borgomainerio, E; Defrance, C; Gatti, F; Galimberti, C; Fontaneto, S; Marchi, S; Molinari, E; Nugues, P; Rinaldi, S; Rovetta, A; Ferretti, G S; Tonci, A; Wann, J; Vincelli, F
1999-01-01
Virtual reality (VR) is an emerging technology that alters the way individuals interact with computers: a 3D computer-generated environment in which a person can move about and interact as if he were actually inside it. Given the high computational power required to create virtual environments, these are usually developed on expensive high-end workstations. However, the significant advances in PC hardware made over the last three years are making PC-based VR a possible solution for clinical assessment and therapy. VREPAR - Virtual Reality Environments for Psychoneurophysiological Assessment and Rehabilitation - are two European Community funded projects (Telematics for health - HC 1053/HC 1055 - http://www.psicologia.net) that are trying to develop a modular PC-based virtual reality system for the medical market. The paper describes the rationale of the developed modules and the preliminary results obtained.
End-User Use of Data Base Query Language: Pros and Cons.
ERIC Educational Resources Information Center
Nicholes, Walter
1988-01-01
Man-machine interface, the concept of a computer "query," a review of database technology, and a description of the use of query languages at Brigham Young University are discussed. The pros and cons of end-user use of database query languages are explored. (Author/MLW)
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.
2006-05-01
We live in an era of unprecedented data volumes, multidisciplinary analysis and synthesis, and an emphasis on active, learner-centered education. For instance, a new generation of satellite instruments is being designed for the GOES-R and NPOESS programs to deliver terabytes of data each day. Similarly, high-resolution, coupled models run over a wide range of temporal scales are generating data at unprecedented rates. Complex environmental problems such as the El Nino/Southern Oscillation, climate change, and the water cycle transcend not only disciplinary but also geographic boundaries, with their impacts and implications touching every region and community of the world. The understanding and solution of these inherently global scientific and social problems require integrated observations that cover all areas of the globe, international sharing and flow of data, and Earth system science approaches. Contemporary education strategies recommend adopting an Earth system science approach for teaching the geosciences, employing new pedagogical techniques such as enquiry-based learning and hands-on activities. Needless to add, today's education and research enterprise depends heavily on easy-to-use, robust, flexible and scalable cyberinfrastructure, especially on the ready availability of quality data and appropriate tools to manipulate and integrate those data. Fortunately, rapid advances in computing, communication and information technologies have provided solutions that are being applied to advance teaching, research, and service. The exponential growth in the use of the Internet in education and research, largely due to the advent of the World Wide Web, is well documented. On the other hand, how other technological and community trends have shaped the development and application of cyberinfrastructure, especially in the data services area, is less well understood. For example, the computing industry is converging on an approach called Web services that enables a standard and yet revolutionary way of building applications and methods to connect and exchange information over the Web. This new approach, based on XML - a widely accepted format for exchanging data and corresponding semantics over the Internet - enables applications, computer systems, and information processes to work together in fundamentally different ways. Likewise, the advent of digital libraries, grid computing platforms, interoperable frameworks, standards and protocols, open-source software, and community atmospheric models have been important drivers in shaping the use of a new generation of end-to-end cyberinfrastructure for solving some of the most challenging scientific and educational problems. In this talk, I will present an overview of the scientific, technological, and educational landscape, discuss recent developments in cyberinfrastructure, and describe Unidata's role in and vision for providing easy-to-use, robust, end-to-end data services for solving geoscientific problems and advancing student learning.
A virtual computer lab for distance biomedical technology education.
Locatis, Craig; Vega, Anibal; Bhagwat, Medha; Liu, Wei-Li; Conde, Jose
2008-03-13
The National Library of Medicine's National Center for Biotechnology Information offers mini-courses which entail applying concepts in biochemistry and genetics to search genomics databases and other information sources. They are highly interactive and involve use of 3D molecular visualization software that can be computationally taxing. Methods were devised to offer the courses at a distance so as to provide as much functionality of a computer lab as possible, the venue where they are normally taught. The methods, which can be employed with varied videoconferencing technology and desktop sharing software, were used to deliver mini-courses at a distance in pilot applications where students could see demonstrations by the instructor and the instructor could observe and interact with students working at their remote desktops. Student ratings of the learning experience and comments to open ended questions were similar to those when the courses are offered face to face. The real time interaction and the instructor's ability to access student desktops from a distance in order to provide individual assistance and feedback were considered invaluable. The technologies and methods mimic much of the functionality of computer labs and may be usefully applied in any context where content changes frequently, training needs to be offered on complex computer applications at a distance in real time, and where it is necessary for the instructor to monitor students as they work.
Information technologies for astrophysics circa 2001
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1991-01-01
It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is less easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large data sets. Three limiting paradigms are as follows: saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear model of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.
ERIC Educational Resources Information Center
Pinelli, Thomas E.; And Others
1992-01-01
Discusses U.S. technology policy and the transfer of scientific and technical information (STI). Results of a study of knowledge diffusion in the aerospace industry are reported, including data on aerospace information intermediaries, use of computer and information technologies, and the use of NASA (National Aeronautics and Space Administration)…
Simplified Distributed Computing
NASA Astrophysics Data System (ADS)
Li, G. G.
2006-05-01
Distributed computing ranges from high-performance parallel computing and grid computing to environments where idle CPU cycles and storage space of numerous networked systems are harnessed to work together over the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report status. The processing queue is the key to the success of this distributed system. We use BEA's Weblogic JMS queue in our implementation. It guarantees message delivery and has message priority and re-try features so that tasks never get lost. The entire system is built on the J2EE technology and it can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to manage and connect existing applications to the system so that applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is an open system, and any number and type of machines can join the system to provide computational power. This asynchronous message-based system can achieve response times on the order of a second. For efficiency, communications between distributed tasks are often done at the start and end of the tasks, but intermediate status of the tasks can also be provided.
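To make the job-partitioning pattern described above concrete, here is a minimal sketch in Python. It is not the authors' WebLogic JMS implementation; the standard-library queue stands in for the processing queue, and names such as partition_job and task_handler are hypothetical.

```python
import queue
import threading

task_queue = queue.Queue()    # stands in for the JMS processing queue
result_queue = queue.Queue()

def partition_job(job_data, n_tasks):
    """Job handler step: split a job into relatively independent tasks."""
    chunk = max(1, len(job_data) // n_tasks)
    return [job_data[i:i + chunk] for i in range(0, len(job_data), chunk)]

def task_handler():
    """Task handler: pick up tasks, process them, put results back."""
    while True:
        task_id, data = task_queue.get()
        result_queue.put((task_id, sum(x * x for x in data)))  # toy computation
        task_queue.task_done()

def run_job(job_data, n_workers=4):
    """Job handler: distribute tasks, then assemble results into the overall solution."""
    tasks = partition_job(job_data, n_workers)
    for i, t in enumerate(tasks):
        task_queue.put((i, t))
    for _ in range(n_workers):
        threading.Thread(target=task_handler, daemon=True).start()
    task_queue.join()
    partial = [result_queue.get() for _ in range(len(tasks))]
    return sum(r for _, r in sorted(partial))   # assemble in task order

if __name__ == "__main__":
    print(run_job(list(range(1000))))
```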
Experiences of Student Mathematics-Teachers in Computer-Based Mathematics Learning Environment
ERIC Educational Resources Information Center
Karatas, Ilhan
2011-01-01
Computer technology in mathematics education enabled the students find many opportunities for investigating mathematical relationships, hypothesizing, and making generalizations. These opportunities were provided to pre-service teachers through a faculty course. At the end of the course, the teachers were assigned project tasks involving…
7 CFR 2.98 - Director, Management Services.
Code of Federal Regulations, 2011 CFR
2011-01-01
... management services; information technology services related to end user office automation, desktop computers, enterprise networking support, handheld devices and voice telecommunications; with authority to take actions...
7 CFR 2.98 - Director, Management Services.
Code of Federal Regulations, 2013 CFR
2013-01-01
... management services; information technology services related to end user office automation, desktop computers, enterprise networking support, handheld devices and voice telecommunications; with authority to take actions...
7 CFR 2.98 - Director, Management Services.
Code of Federal Regulations, 2012 CFR
2012-01-01
... management services; information technology services related to end user office automation, desktop computers, enterprise networking support, handheld devices and voice telecommunications; with authority to take actions...
ERIC Educational Resources Information Center
Cohen, Daniel J.; Rosenzweig, Roy
2006-01-01
The combination of the Web and the cell phone forecasts the end of the inexpensive technologies of multiple-choice tests and grading machines. These technological developments are likely to bring the multiple-choice test to the verge of obsolescence, mounting a substantial challenge to the presentation of history and other disciplines.
2014-09-01
becoming a more and more prevalent technology in the business world today. According to Syal and Goswami (2012), cloud technology is seen as a...use of computing resources, applications, and personal files without reliance on a single computer or system (Syal & Goswami, 2012). By operating in...cloud services largely being web-based, which can be retrieved through most systems with access to the Internet (Syal & Goswami, 2012). The end user can
ERIC Educational Resources Information Center
National Center for State Courts, Williamsburg, VA.
This report summarizes the findings of the Computer-Aided Transcription (CAT) Project, which conducted a 14-month study of the technology and use of computer systems for translating into English the shorthand notes taken by court reporters on stenotype machines. Included are the state of the art of CAT at the end of 1980 and anticipated future…
Computers in Public Schools: Changing the Image with Image Processing.
ERIC Educational Resources Information Center
Raphael, Jacqueline; Greenberg, Richard
1995-01-01
The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…
1991-05-01
Marine Corps Training Systems (CBESS) memorization training Intelligence Center, Dam Neck Threat memorization training Commander Tactical Wings, Atlantic...News Shipbuilding Technical training AEGIS Training Center, Dare Artificial Intelligence (AI) Tools Computerized front-end analysis tools NETSCPAC...Technology Department and provides computational and electronic mail support for research in areas of artificial intelligence, computer-assisted instruction
Shalf, John M.; Leland, Robert
2015-12-01
Here, photolithography systems are on pace to reach atomic scale by the mid-2020s, necessitating alternatives to continue realizing faster, more predictable, and cheaper computing performance. If the end of Moore's law is real, a research agenda is needed to assess the viability of novel semiconductor technologies and navigate the ensuing challenges.
26 CFR 1.168(j)-1T - Questions and answers concerning tax-exempt entity leasing rules (temporary).
Code of Federal Regulations, 2011 CFR
2011-04-01
... technological equipment” means (1) any computer or peripheral equipment, (2) any high technology telephone..., electromechanical, or computer-based high technology equipment which is tangible personal property used in the... before the expiration of its physical useful life. High technology medical equipment may include computer...
Russ, Alissa L; Saleem, Jason J
2018-02-01
The quality of usability testing is highly dependent upon the associated usability scenarios. To promote usability testing as part of electronic health record (EHR) certification, the Office of the National Coordinator (ONC) for Health Information Technology requires that vendors test specific capabilities of EHRs with clinical end-users and report their usability testing process - including the test scenarios used - along with the results. The ONC outlines basic expectations for usability testing, but there is little guidance in usability texts or scientific literature on how to develop usability scenarios for healthcare applications. The objective of this article is to outline key factors to consider when developing usability scenarios and tasks to evaluate computer-interface based health information technologies. To achieve this goal, we draw upon a decade of our experience conducting usability tests with a variety of healthcare applications and a wide range of end-users, to include healthcare professionals as well as patients. We discuss 10 key factors that influence scenario development: objectives of usability testing; roles of end-user(s); target performance goals; evaluation time constraints; clinical focus; fidelity; scenario-related bias and confounders; embedded probes; minimize risks to end-users; and healthcare related outcome measures. For each factor, we present an illustrative example. This article is intended to aid usability researchers and practitioners in their efforts to advance health information technologies. The article provides broad guidance on usability scenario development and can be applied to a wide range of clinical information systems and applications. Published by Elsevier Inc.
The Architecture of Information at Plateau Beaubourg
ERIC Educational Resources Information Center
Branda, Ewan Edward
2012-01-01
During the course of the 1960s, computers and information networks made their appearance in the public imagination. To architects on the cusp of architecture's postmodern turn, information technology offered new forms, metaphors, and techniques by which modern architecture's technological and utopian basis could be reasserted. Yet by the end of…
Electronic Technologies and Preservation.
ERIC Educational Resources Information Center
Waters, Donald J.
Digital imaging technology, which is used to take a computer picture of documents at the page level, has significant potential as a tool for preserving deteriorating library materials. Multiple reproductions can be made without loss of quality; the end product is compact; reproductions can be made in paper, microfilm, or CD-ROM; and access over…
Models and Methodologies for Multimedia Courseware Production.
ERIC Educational Resources Information Center
Barker, Philip; Giller, Susan
Many new technologies are now available for delivering and/or providing access to computer-based learning (CBL) materials. These technologies vary in sophistication in many important ways, depending upon the bandwidth that they provide, the interactivity that they offer and the types of end-user connectivity that they support. Invariably,…
Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander
2015-01-01
Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual low-throughput. Here, we present an open-source phenomics platform "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage and share crop root images with metadata and compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such it enables plant scientists to spend more time on science rather than on technology. All stored and computed data is easily accessible to the public and broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sims, Benjamin
Think of some examples of repair in everyday life. Maybe you had a car accident and took your car to the body shop. Maybe the head came off your child’s doll and you had to glue it back on. Maybe the handle of your shovel cracked and you wrapped the cracked area with duct tape to hold it together. These are examples of what could be called reactive repair, where an unexpected accident initiates a sequence of action and decision-making that ends in repair. In these cases, most of the thinking and planning surrounding repair takes place after a breakdown has been identified. This type of repair is often taken to be distinct from deliberate design, as it occurs within the context of technology that is already in operation, often has an improvisational character, and may be performed by end users or technicians rather than credentialed experts. But does repair always have to be reactive? And if not, what does this tell us about the distinction between design and repair, and their respective roles in shaping technological change? The short answer is that repair, like design, can play a dynamic and forward-looking role in shaping technological trajectories – not only stabilizing existing systems, but anticipating change and generating new technological futures.
Sims, Benjamin
2017-05-23
Think of some examples of repair in everyday life. Maybe you had a car accident and took your car to the body shop. Maybe the head came off your child’s doll and you had to glue it back on. Maybe the handle of your shovel cracked and you wrapped the cracked area with duct tape to hold it together. These are examples of what could be called reactive repair, where an unexpected accident initiates a sequence of action and decision-making that ends in repair. In these cases, most of the thinking and planning surrounding repair takes place after a breakdown has been identified. This type of repair is often taken to be distinct from deliberate design, as it occurs within the context of technology that is already in operation, often has an improvisational character, and may be performed by end users or technicians rather than credentialed experts. But does repair always have to be reactive? And if not, what does this tell us about the distinction between design and repair, and their respective roles in shaping technological change? The short answer is that repair, like design, can play a dynamic and forward-looking role in shaping technological trajectories – not only stabilizing existing systems, but anticipating change and generating new technological futures.
Context Aware Systems, Methods and Trends in Smart Home Technology
NASA Astrophysics Data System (ADS)
Robles, Rosslin John; Kim, Tai-Hoon
Context aware applications respond and adapt to changes in the computing environment. It is the concept of leveraging information about the end user to improve the quality of the interaction. New technologies in context-enriched services will use location, presence, social attributes, and other environmental information to anticipate an end user's immediate needs, offering more-sophisticated, situation-aware and usable functions. Smart homes connect all the devices and appliances in your home so they can communicate with each other and with you. Context-awareness can be applied to Smart Home technology. In this paper, we discuss the context-aware tools for development of Smart Home Systems.
A multitasking, multisinked, multiprocessor data acquisition front end
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, R.; Au, R.; Molen, A.V.
1989-10-01
The authors have developed a generalized data acquisition front end system which is based on MC68020 processors running a commercial real-time kernel (pSOS), and implemented primarily in a high level language (C). This system has been attached to the back end on-line computing system at NSCL via our high performance ETHERNET protocol. Data may be simultaneously sent to any number of back end systems. Fixed fraction sampling along links to back end computing is also supported. A nonprocedural program generator simplifies the development of experiment specific code.
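The fixed-fraction sampling mentioned above can be pictured with a short illustrative sketch (an assumption about the general idea, not the NSCL code): every event reaches the primary back ends, while sampling back ends receive only a stated fraction of the event stream.

```python
import random

def dispatch_events(events, primary_sinks, sampled_sinks, fraction=0.1):
    """Send every event to all primary back ends; send only a fixed fraction
    of events to sampling back ends (e.g., online monitoring displays)."""
    for event in events:
        for sink in primary_sinks:
            sink.append(event)                 # full data stream
        if random.random() < fraction:
            for sink in sampled_sinks:
                sink.append(event)             # fixed-fraction sample

if __name__ == "__main__":
    tape, monitor = [], []
    dispatch_events(range(10_000), [tape], [monitor], fraction=0.1)
    print(len(tape), len(monitor))   # ~10000 and ~1000
```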
Automated inspection of turbine blades: Challenges and opportunities
NASA Technical Reports Server (NTRS)
Mehta, Manish; Marron, Joseph C.; Sampson, Robert E.; Peace, George M.
1994-01-01
Current inspection methods for complex shapes and contours exemplified by aircraft engine turbine blades are expensive, time-consuming and labor intensive. The logistics support of new manufacturing paradigms such as integrated product-process development (IPPD) for current and future engine technology development necessitates high speed, automated inspection of forged and cast jet engine blades, combined with a capability of retaining and retrieving metrology data for process improvements upstream (designer-level) and downstream (end-user facilities) at commercial and military installations. The paper presents the opportunities emerging from a feasibility study conducted using 3-D holographic laser radar in blade inspection. Requisite developments in computing technologies for systems integration of blade inspection in production are also discussed.
Sabti, Ahmed Abdulateef; Chaichan, Rasha Sami
2014-01-01
This study examines the attitudes of Saudi Arabian high school students toward the use of computer technologies in learning English. The study also discusses the possible barriers that affect and limit the actual usage of computers. A quantitative approach is applied in this research, which involved 30 Saudi Arabian students at a high school in Kuala Lumpur, Malaysia. The respondents comprised 15 males and 15 females aged between 16 and 18 years. Two instruments, namely, the Scale of Attitude toward Computer Technologies (SACT) and Barriers affecting Students' Attitudes and Use (BSAU), were used to collect data. The Technology Acceptance Model (TAM) of Davis (1989) was utilized. The analysis of the study revealed gender differences in attitudes toward the use of computer technologies in learning English. Female students showed higher and more positive attitudes toward the use of computer technologies in learning English than males. Both male and female participants demonstrated high and positive perceptions of the Usefulness and perceived Ease of Use of computer technologies in learning English. Three barriers that affected and limited the use of computer technologies in learning English were identified by the participants: skill, equipment, and motivation. Among these barriers, skill had the highest effect, whereas motivation showed the least effect.
SAMICS marketing and distribution model
NASA Technical Reports Server (NTRS)
1978-01-01
SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.
ERIC Educational Resources Information Center
Zoanetti, Nathan; Les, Magdalena; Leigh-Lancaster, David
2014-01-01
From 2011-2013 the VCAA conducted a trial aligning the use of computers in curriculum, pedagogy and assessment culminating in a group of 62 volunteer students sitting their end of Year 12 technology-active Mathematical Methods (CAS) Examination 2 as a computer-based examination. This paper reports on statistical modelling undertaken to compare the…
Generic Software for Emulating Multiprocessor Architectures.
1985-05-01
AD-A157 662: Generic Software for Emulating Multiprocessor Architectures. MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139. Keywords: computer architecture, emulation, simulation, dataflow.
Enhancing Care of Aged and Dying Prisoners: Is e-Learning a Feasible Approach?
Loeb, Susan J; Penrod, Janice; Myers, Valerie H; Baney, Brenda L; Strickfaden, Sophia M; Kitt-Lewis, Erin; Wion, Rachel K
Prisons and jails are facing sharply increased demands in caring for aged and dying inmates. Our Toolkit for Enhancing End-of-life Care in Prisons effectively addressed end-of-life (EOL) care; however, geriatric content was limited, and the product was not formatted for broad dissemination. Prior research adapted best practices in EOL care and aging, but delivery methods lacked emerging technology-focused learning and interactivity. Our purposes were to uncover current training approaches and preferences and to ascertain the technological capacity of correctional settings to deliver computer-based and other e-learning training. An environmental scan was conducted with 11 participants from U.S. prisons and jails to ensure proper fit, in terms of content and technology capacity, between an envisioned computer-based training product and correctional settings. Environmental scan findings focused on content of training, desirable qualities of training, prominence of "homegrown" products, and feasibility of commercial e-learning. This study identified qualities of training programs to adopt and pitfalls to avoid, and revealed technology-related issues to be mindful of when designing computer-based training for correctional settings. Participants spontaneously expressed an interest in geriatrics and EOL training using this learning modality, as long as training allowed for tailoring of materials.
NASA's Climate Data Services Initiative
NASA Astrophysics Data System (ADS)
McInerney, M.; Duffy, D.; Schnase, J. L.; Webster, W. P.
2013-12-01
Our understanding of the Earth's processes is based on a combination of observational data records and mathematical models. The size of NASA's space-based observational data sets is growing dramatically as new missions come online. However, a potentially bigger data challenge is posed by the work of climate scientists, whose models are regularly producing data sets of hundreds of terabytes or more. It is important to understand that the 'Big Data' challenge of climate science cannot be solved with a single technological approach or an ad hoc assemblage of technologies. It will require a multi-faceted, well-integrated suite of capabilities that include cloud computing, large-scale compute-storage systems, high-performance analytics, scalable data management, and advanced deployment mechanisms in addition to the existing, well-established array of mature information technologies. It will also require a coherent organizational effort that is able to focus on the specific and sometimes unique requirements of climate science. Given that it is the knowledge that is gained from data that is of ultimate benefit to society, data publication and data analytics will play a particularly important role. In an effort to accelerate scientific discovery and innovation through broader use of climate data, NASA Goddard Space Flight Center's Office of Computational and Information Sciences and Technology has embarked on a determined effort to build a comprehensive, integrated data publication and analysis capability for climate science. The Climate Data Services (CDS) Initiative integrates people, expertise, and technology into a highly-focused, next-generation, one-stop climate science information service. The CDS Initiative is providing the organizational framework, processes, and protocols needed to deploy existing information technologies quickly using a combination of enterprise-level services and an expanding array of cloud services. Crucial to its effectiveness, the CDS Initiative is developing the technical expertise to move new information technologies from R&D into operational use. This combination enables full, end-to-end support for climate data publishing and data analytics, and affords the flexibility required to meet future and unanticipated needs. Current science efforts being supported by the CDS Initiative include IPCC, OBS4MIP, ANA4MIPS, MERRA II, National Climate Assessment, the Ocean Data Assimilation project, NASA Earth Exchange (NEX), and the RECOVER Burned Area Emergency Response decision support system. Service offerings include an integrated suite of classic technologies (FTP, LAS, THREDDS, ESGF, GRaD-DODS, OPeNDAP, WMS, ArcGIS Server), emerging technologies (iRODS, UVCDAT), and advanced technologies (MERRA Analytic Services, MapReduce, Ontology Services, and the CDS API). This poster will describe the CDS Initiative, provide details about the Initiative's advanced offerings, and lay out the CDS Initiative's deployment roadmap.
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which has the characteristics of strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes, and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAVs and the second and third phases of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper surveys the development of navigation based on computer vision technology in the field of UAV navigation research and concludes that visual navigation is mainly applied to three aspects, as follows. (1) Acquisition of UAV navigation parameters. Parameters including UAV attitude, position and velocity information can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantaneous matching images and reference images, and the relationship between the carrier's velocity and characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision technology, including feature matching, template matching, image frames and so on, are mainly introduced. (3) Target tracking and positioning. Using the obtained images, the UAV position is calculated using optical flow methods, the MeanShift algorithm, the CamShift algorithm, Kalman filtering, and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied to rapid-response applications. (2) Distributed-network visual systems, in which several discrete image acquisition sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to make up for limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low sampling frequency, low processing efficiency and strong noise. In the end, the difficulties of navigation based on computer vision technology in practical application are briefly discussed. (1) Because of the huge workload of image operations, the real-time performance of the system is poor. (2) Because the environment has a large impact on imaging, the anti-interference ability of the system is poor. (3) Because it works only in particular environments, the system has poor adaptability.
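As an illustration of the optical-flow-based tracking surveyed above, the following sketch uses the OpenCV Python bindings (the survey names no library, so this toolchain is an assumption) to track sparse features between two frames with pyramidal Lucas-Kanade optical flow; the input file name is hypothetical.

```python
import cv2
import numpy as np

def track_features(prev_frame, next_frame, max_corners=200):
    """Track corner features from one frame to the next with pyramidal LK optical flow."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Detect good features to track in the first frame.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))

    # Estimate where those features moved in the second frame.
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)

    good_old = p0[status.flatten() == 1].reshape(-1, 2)
    good_new = p1[status.flatten() == 1].reshape(-1, 2)
    return good_old, good_new

if __name__ == "__main__":
    cap = cv2.VideoCapture("flight.mp4")  # hypothetical input video
    ok1, f1 = cap.read()
    ok2, f2 = cap.read()
    if ok1 and ok2:
        old_pts, new_pts = track_features(f1, f2)
        if len(old_pts):
            # Mean image-plane displacement; a crude proxy for apparent ego-motion.
            print("mean flow (pixels):", (new_pts - old_pts).mean(axis=0))
    cap.release()
```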
The Chemical Engineer's Toolbox: A Glass Box Approach to Numerical Problem Solving
ERIC Educational Resources Information Center
Coronell, Daniel G.; Hariri, M. Hossein
2009-01-01
Computer programming in undergraduate engineering education all too often begins and ends with the freshman programming course. Improvements in computer technology and curriculum revision have improved this situation, but often at the expense of the students' learning due to the use of commercial "black box" software. This paper describes the…
NASA Astrophysics Data System (ADS)
Alimi, Isiaka A.; Monteiro, Paulo P.; Teixeira, António L.
2017-11-01
The key paths toward the fifth generation (5G) network requirements are centralized processing and small-cell densification systems implemented on cloud computing-based radio access networks (CC-RANs). The increasing recognition of CC-RANs can be attributed to their valuable features regarding system performance optimization and cost-effectiveness. Nevertheless, realization of the stringent requirements of the fronthaul that connects the network elements is highly demanding. In this paper, considering small-cell network architectures, we present multiuser mixed radio-frequency/free-space optical (RF/FSO) relay networks as feasible technologies for alleviating the stringent requirements in the CC-RANs. In this study, we use the end-to-end (e2e) outage probability, average symbol error probability (ASEP), and ergodic channel capacity as the performance metrics in our analysis. Simulation results show the suitability of deploying mixed RF/FSO schemes in real-life scenarios.
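The end-to-end outage metric used above can also be estimated numerically. The sketch below is a Monte Carlo illustration, not the paper's analytical derivation; it assumes a two-hop decode-and-forward link with Rayleigh fading on the RF hop, weak lognormal turbulence on the FSO hop, and illustrative SNR and threshold values.

```python
import numpy as np

rng = np.random.default_rng(0)

def e2e_outage_probability(avg_snr_rf_db=15.0, avg_snr_fso_db=20.0,
                           snr_threshold_db=5.0, sigma_x=0.25, n=1_000_000):
    """Monte Carlo estimate of outage for a dual-hop decode-and-forward link.

    Illustrative assumptions: the RF hop sees Rayleigh fading, so its
    instantaneous SNR is exponential with the given mean; the FSO hop sees
    weak lognormal turbulence with log-amplitude std sigma_x (unit mean
    irradiance). The e2e link is in outage when either hop falls below the
    SNR threshold.
    """
    gamma_th = 10 ** (snr_threshold_db / 10)

    # RF hop: exponential instantaneous SNR (Rayleigh fading envelope).
    snr_rf = rng.exponential(10 ** (avg_snr_rf_db / 10), n)

    # FSO hop: lognormal irradiance fluctuation scaling the average SNR.
    irradiance = rng.lognormal(mean=-2 * sigma_x**2, sigma=2 * sigma_x, size=n)
    snr_fso = (10 ** (avg_snr_fso_db / 10)) * irradiance

    # Decode-and-forward: the weaker hop determines the end-to-end SNR.
    return np.mean(np.minimum(snr_rf, snr_fso) < gamma_th)

if __name__ == "__main__":
    print("estimated e2e outage probability:", e2e_outage_probability())
```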
Toward a first-principles integrated simulation of tokamak edge plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C S; Klasky, Scott A; Cummings, Julian
2008-01-01
Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes, XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.
Nucleic Acid-Based Nanodevices in Biological Imaging.
Chakraborty, Kasturi; Veetil, Aneesh T; Jaffrey, Samie R; Krishnan, Yamuna
2016-06-02
The nanoscale engineering of nucleic acids has led to exciting molecular technologies for high-end biological imaging. The predictable base pairing, high programmability, and superior new chemical and biological methods used to access nucleic acids with diverse lengths and in high purity, coupled with computational tools for their design, have allowed the creation of a stunning diversity of nucleic acid-based nanodevices. Given their biological origin, such synthetic devices have a tremendous capacity to interface with the biological world, and this capacity lies at the heart of several nucleic acid-based technologies that are finding applications in biological systems. We discuss these diverse applications and emphasize the advantage, in terms of physicochemical properties, that the nucleic acid scaffold brings to these contexts. As our ability to engineer this versatile scaffold increases, its applications in structural, cellular, and organismal biology are clearly poised to massively expand.
Käthner, Ivo; Halder, Sebastian; Hintermüller, Christoph; Espinosa, Arnau; Guger, Christoph; Miralles, Felip; Vargiu, Eloisa; Dauwalder, Stefan; Rafael-Palou, Xavier; Solà, Marc; Daly, Jean M.; Armstrong, Elaine; Martin, Suzanne; Kübler, Andrea
2017-01-01
Current brain-computer interface (BCIs) software is often tailored to the needs of scientists and technicians and therefore complex to allow for versatile use. To facilitate home use of BCIs a multifunctional P300 BCI with a graphical user interface intended for non-expert set-up and control was designed and implemented. The system includes applications for spelling, web access, entertainment, artistic expression and environmental control. In addition to new software, it also includes new hardware for the recording of electroencephalogram (EEG) signals. The EEG system consists of a small and wireless amplifier attached to a cap that can be equipped with gel-based or dry contact electrodes. The system was systematically evaluated with a healthy sample, and targeted end users of BCI technology, i.e., people with a varying degree of motor impairment tested the BCI in a series of individual case studies. Usability was assessed in terms of effectiveness, efficiency and satisfaction. Feedback of users was gathered with structured questionnaires. Two groups of healthy participants completed an experimental protocol with the gel-based and the dry contact electrodes (N = 10 each). The results demonstrated that all healthy participants gained control over the system and achieved satisfactory to high accuracies with both gel-based and dry electrodes (average error rates of 6 and 13%). Average satisfaction ratings were high, but certain aspects of the system such as the wearing comfort of the dry electrodes and design of the cap, and speed (in both groups) were criticized by some participants. Six potential end users tested the system during supervised sessions. The achieved accuracies varied greatly from no control to high control with accuracies comparable to that of healthy volunteers. Satisfaction ratings of the two end-users that gained control of the system were lower as compared to healthy participants. The advantages and disadvantages of the BCI and its applications are discussed and suggestions are presented for improvements to pave the way for user friendly BCIs intended to be used as assistive technology by persons with severe paralysis. PMID:28588442
Käthner, Ivo; Halder, Sebastian; Hintermüller, Christoph; Espinosa, Arnau; Guger, Christoph; Miralles, Felip; Vargiu, Eloisa; Dauwalder, Stefan; Rafael-Palou, Xavier; Solà, Marc; Daly, Jean M; Armstrong, Elaine; Martin, Suzanne; Kübler, Andrea
2017-01-01
Current brain-computer interface (BCI) software is often tailored to the needs of scientists and technicians and is therefore too complex to allow for versatile use. To facilitate home use of BCIs, a multifunctional P300 BCI with a graphical user interface intended for non-expert set-up and control was designed and implemented. The system includes applications for spelling, web access, entertainment, artistic expression and environmental control. In addition to new software, it also includes new hardware for the recording of electroencephalogram (EEG) signals. The EEG system consists of a small and wireless amplifier attached to a cap that can be equipped with gel-based or dry contact electrodes. The system was systematically evaluated with a healthy sample, and targeted end users of BCI technology, i.e., people with varying degrees of motor impairment, tested the BCI in a series of individual case studies. Usability was assessed in terms of effectiveness, efficiency and satisfaction. Feedback from users was gathered with structured questionnaires. Two groups of healthy participants completed an experimental protocol with the gel-based and the dry contact electrodes (N = 10 each). The results demonstrated that all healthy participants gained control over the system and achieved satisfactory to high accuracies with both gel-based and dry electrodes (average error rates of 6% and 13%, respectively). Average satisfaction ratings were high, but certain aspects of the system, such as the wearing comfort of the dry electrodes, the design of the cap, and speed (in both groups), were criticized by some participants. Six potential end users tested the system during supervised sessions. The achieved accuracies varied greatly, from no control to high control with accuracies comparable to those of healthy volunteers. Satisfaction ratings of the two end users who gained control of the system were lower than those of healthy participants. The advantages and disadvantages of the BCI and its applications are discussed, and suggestions are presented for improvements to pave the way for user-friendly BCIs intended to be used as assistive technology by persons with severe paralysis.
A Fast lattice-based polynomial digital signature system for m-commerce
NASA Astrophysics Data System (ADS)
Wei, Xinzhou; Leung, Lin; Anshel, Michael
2003-01-01
Privacy and data integrity are not guaranteed in current wireless communications because of a security hole in the Wireless Application Protocol (WAP) version 1.2 gateway. One remedy is to provide end-to-end security in m-commerce by applying application-level security on top of the current WAP 1.2. Traditional security technologies such as RSA and ECC, as applied on enterprise servers, are not practical for wireless devices because these devices have relatively weak computational power and limited memory compared with servers. In this paper, we develop a lattice-based polynomial digital signature system based on NTRU's Polynomial Authentication and Signature Scheme (PASS), which makes it feasible to apply high-level security on both the server and the wireless device side.
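To give a flavour of the underlying arithmetic, the sketch below implements cyclic polynomial convolution in Z_q[x]/(x^N - 1), the core operation in NTRU-family schemes such as PASS; the parameters and the toy commitment step are illustrative assumptions, not the actual PASS protocol or its parameter set.

```python
# Minimal sketch of the ring arithmetic underlying NTRU-style schemes such as
# PASS: polynomials in Z_q[x]/(x^N - 1), multiplied by cyclic convolution.
# N, Q and the toy "commitment" step below are illustrative assumptions only.

import random

N, Q = 17, 769  # toy ring parameters (assumed for illustration)

def conv(a, b):
    """Cyclic convolution: (a * b) mod (x^N - 1), coefficients mod Q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % Q
    return c

def small_poly():
    """Random polynomial with small coefficients in {-1, 0, 1}."""
    return [random.choice((-1, 0, 1)) for _ in range(N)]

# Toy commit-style computation: multiply a small secret by a public polynomial.
# Real PASS signing/verification additionally evaluates polynomials at roots of
# unity and applies norm checks, which are omitted here.
secret = small_poly()
public = [random.randrange(Q) for _ in range(N)]
commitment = conv(secret, public)
print(commitment)
```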
Blood Pump Development Using Rocket Engine Flow Simulation Technology
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin
2001-01-01
This paper reports the progress made towards developing a complete blood flow simulation capability in humans, especially in the presence of artificial devices such as valves and ventricular assist devices. Device modeling poses unique challenges different from computing the blood flow in natural hearts and arteries. Many elements are needed to quantify the flow in these devices, such as flow solvers, geometry modeling including flexible walls, moving boundary procedures, and physiological characterization of blood. As a first step, computational technology developed for aerospace applications was extended to the analysis and development of a ventricular assist device (VAD), i.e., a blood pump. The blood flow in a VAD is practically incompressible and Newtonian, and thus an incompressible Navier-Stokes solution procedure can be applied. A primitive variable formulation is used in conjunction with the overset grid approach to handle complex moving geometry. The primary purpose of developing the incompressible flow analysis capability was to quantify the flow in advanced turbopumps for space propulsion systems. The same procedure has been extended to the development of the NASA-DeBakey VAD, which is based on an axial blood pump. Due to massive computing requirements, high-end computing is necessary for simulating three-dimensional flow in these pumps. Computational, experimental, and clinical results are presented.
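As context for the solution procedure mentioned above, the incompressible Newtonian flow model in primitive variables (velocity and pressure) takes the standard form below; this is a generic statement of the governing equations, not the specific artificial-compressibility or discretization details of the solver, which the abstract does not give.

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u}
```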
Ad Hoc modeling, expert problem solving, and R&T program evaluation
NASA Technical Reports Server (NTRS)
Silverman, B. G.; Liebowitz, J.; Moustakis, V. S.
1983-01-01
A simplified cost and time (SCAT) analysis program utilizing personal-computer technology is presented and demonstrated in the case of the NASA-Goddard end-to-end data system. The difficulties encountered in implementing complex program-selection and evaluation models in the research and technology field are outlined. The prototype SCAT system described here is designed to allow user-friendly ad hoc modeling in real time and at low cost. A worksheet constructed on the computer screen displays the critical parameters and shows how each is affected when one is altered experimentally. In the NASA case, satellite data-output and control requirements, ground-facility data-handling capabilities, and project priorities are intricately interrelated. Scenario studies of the effects of spacecraft phaseout or new spacecraft on throughput and delay parameters are shown. The use of a network of personal computers for higher-level coordination of decision-making processes is suggested, as a complement or alternative to complex large-scale modeling.
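The worksheet idea described above (critical parameters recomputed when one is altered experimentally) can be illustrated with a minimal sketch; the parameter names and the toy throughput/delay model below are assumptions for illustration, not the actual SCAT model.

```python
# Minimal sketch of a SCAT-style "worksheet": named parameters plus derived
# throughput/delay figures recomputed whenever one input is altered.
# The parameter names and the toy delay model are illustrative assumptions.

params = {
    "spacecraft_count": 4,        # active spacecraft
    "output_per_sc_mbps": 5.0,    # data output per spacecraft (Mbit/s)
    "ground_capacity_mbps": 25.0, # ground-facility handling capability (Mbit/s)
}

def derived(p):
    offered = p["spacecraft_count"] * p["output_per_sc_mbps"]
    throughput = min(offered, p["ground_capacity_mbps"])
    utilisation = offered / p["ground_capacity_mbps"]
    # toy queueing-style delay factor: grows sharply as utilisation nears 1
    delay_factor = float("inf") if utilisation >= 1 else 1.0 / (1.0 - utilisation)
    return {"offered_mbps": offered, "throughput_mbps": throughput,
            "utilisation": utilisation, "relative_delay": delay_factor}

print("baseline: ", derived(params))

# Scenario study: phase out one spacecraft and observe throughput and delay.
params["spacecraft_count"] = 3
print("phase-out:", derived(params))
```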
Realizing universal Majorana fermionic quantum computation
NASA Astrophysics Data System (ADS)
Wu, Ya-Jie; He, Jing; Kou, Su-Peng
2014-08-01
Majorana fermionic quantum computation (MFQC) was proposed by S. B. Bravyi and A. Yu. Kitaev [Ann. Phys. (NY) 298, 210 (2002), 10.1006/aphy.2002.6254], who indicated that a (nontopological) fault-tolerant quantum computer built from Majorana fermions may be more efficient than one built from distinguishable two-state systems. However, until now scientists have not known how to realize an MFQC in a physical system. In this paper we propose a possible realization of MFQC. We find that the end of a line defect of a p-wave superconductor or superfluid in a honeycomb lattice traps a Majorana zero mode, which becomes the starting point of MFQC. We then show how to manipulate Majorana fermions to perform universal MFQC, which offers the possibility of high-level local controllability by individually addressing the quantum states of the constituent elements using present-day cold-atom technology.
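For readers unfamiliar with the notation, each ordinary fermion mode c can be split into two Majorana operators in the standard way shown below; this is textbook background rather than the paper's specific lattice construction.

```latex
\gamma_{1} = c + c^{\dagger}, \qquad
\gamma_{2} = -\,i\left(c - c^{\dagger}\right), \qquad
\gamma_{j}^{\dagger} = \gamma_{j}, \qquad
\{\gamma_{j}, \gamma_{k}\} = 2\,\delta_{jk}, \qquad
c = \tfrac{1}{2}\left(\gamma_{1} + i\,\gamma_{2}\right)
```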
Computer Access. Tech Use Guide: Using Computer Technology.
ERIC Educational Resources Information Center
Council for Exceptional Children, Reston, VA. Center for Special Education Technology.
One of nine brief guides for special educators on using computer technology, this guide focuses on access including adaptations in input devices, output devices, and computer interfaces. Low technology devices include "no-technology" devices (usually modifications to existing devices), simple switches, and multiple switches. High technology input…
Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.
Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan
2011-11-01
Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/ ).
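As a rough illustration of the gap-sizing step, the sketch below fits gap sizes by ordinary least squares given a fixed contig ordering and a set of mate-pair links; Opera itself formulates this as an exact quadratic program, and the library parameters and link data here are hypothetical.

```python
# Minimal sketch of gap-size estimation in a fixed scaffold ordering: each
# mate-pair link gives a linear constraint "sum of the gaps it spans ~=
# expected insert size minus the sequence it already covers", and the gaps
# are fit by ordinary least squares. All numbers below are hypothetical.

import numpy as np

contig_len = [10000, 3000, 8000]     # contigs in scaffold order
insert_mean = 8000                   # mate-pair library insert size (assumed)

# links: (left contig index, right contig index, bases already covered by the
# pair inside the contigs and any intermediate contigs)
links = [
    (0, 1, 7500),                    # pair linking contigs 0 and 1
    (1, 2, 7600),                    # pair linking contigs 1 and 2
    (0, 2, 4200 + contig_len[1]),    # pair spanning contig 1 entirely
]

n_gaps = len(contig_len) - 1
A = np.zeros((len(links), n_gaps))
b = np.zeros(len(links))
for r, (i, j, covered) in enumerate(links):
    A[r, i:j] = 1.0                  # gaps g_i .. g_{j-1} are spanned
    b[r] = insert_mean - covered     # distance left over for the gaps

gaps, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated gap sizes:", np.round(gaps, 1))
```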
Opera: Reconstructing Optimal Genomic Scaffolds with High-Throughput Paired-End Sequences
Gao, Song; Sung, Wing-Kin
2011-01-01
Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/). PMID:21929371
Bigdata Driven Cloud Security: A Survey
NASA Astrophysics Data System (ADS)
Raja, K.; Hanifa, Sabibullah Mohamed
2017-08-01
Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth has been observed in the scale of data, or big data, generated through cloud computing. CC consists of a front end, which includes the users' computers and the software required to access the cloud network, and a back end, which consists of the various computers, servers and database systems that create the cloud. The service models SaaS (Software-as-a-Service, in which end users utilize outsourced software), PaaS (Platform-as-a-Service, in which a platform is provided), IaaS (Infrastructure-as-a-Service, in which the physical environment is outsourced), and DaaS (Database-as-a-Service, in which data can be housed within a cloud), through which the traditional cloud ecosystem delivers cloud services, have become a powerful and popular architecture. Many challenges and issues remain in security, and threats are the most vital barrier for the cloud computing environment. The main barrier to the adoption of CC in health care relates to data security: when data are placed on and transmitted over public networks, cyber attacks in any form are anticipated in CC. Hence, cloud service users need to understand the risk of data breaches and the choice of service delivery model during deployment. This survey covers CC security issues in depth, including data security in health care, so that researchers can develop robust security application models using Big Data (BD) on CC that can be created and deployed easily. BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies; in this purview, MapReduce [12] is a good example of big data processing in a cloud environment and a model for cloud providers.
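As a concrete illustration of the MapReduce pattern cited above, here is a minimal single-process sketch in Python; the word-count task and function names are illustrative assumptions, and a real deployment (e.g., Hadoop) would distribute the map, shuffle, and reduce phases across a cluster.

```python
# Minimal sketch of the MapReduce pattern: a map phase emitting (key, value)
# pairs, a shuffle that groups values by key, and a reduce phase aggregating
# each group. Shown here as plain Python functions in a single process.

from collections import defaultdict

def map_phase(record):
    for word in record.split():
        yield word.lower(), 1

def shuffle(mapped_pairs):
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return key, sum(values)

records = ["cloud security survey", "big data on cloud"]
mapped = [pair for rec in records for pair in map_phase(rec)]
results = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(results)  # e.g. {'cloud': 2, 'security': 1, ...}
```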
Towards end to end technology modeling: Carbon nanotube and thermoelectric devices
NASA Astrophysics Data System (ADS)
Salamat, Shuaib
The goal of this work is to demonstrate the feasibility of end-to-end ("atoms to applications") technology modeling. Two different technologies were selected to drive this work. The first technology is carbon nanotube field-effect transistors (CNTFETs), and the goal is to model device-level variability and identify the origin of variations in these devices. Recently, there has been significant progress in understanding the physics of carbon nanotube electronic devices and in identifying their potential applications. For nanotubes, the carrier mobility is high, so low-bias transport across several hundred nanometers is nearly ballistic, and the deposition of high-k gate dielectrics does not degrade the carrier mobility. The conduction and valence bands are symmetric (useful for complementary applications) and the bandstructure is direct (enabling optical emission). Because of these striking features, carbon nanotubes (CNTs) have received much attention. Carbon nanotube field-effect transistors (CNTFETs) are among the main potential candidates for large-area electronics. In this research, systematic simulation approaches are applied to understand the intrinsic performance variability in CNTFETs. It is shown that control over the diameter distribution is a critically important process parameter for attaining high-performance transistors and circuits with characteristics rivaling those of state-of-the-art Si technology. The second technology driver concerns the development of a multi-scale framework for thermoelectric device design. An essential step in the development of new materials and devices for thermoelectrics is to develop accurate, efficient, and realistic models. The ready availability of user-friendly ab initio codes and ever-increasing computing power have made band structure calculations routine. Thermoelectric device design, however, is still largely done at the effective-mass level. Tools that allow device designers to make use of sophisticated electronic structure and phonon dispersion calculations are needed. We have developed a proof-of-concept, integrated, multi-scale design framework for TE technology. Beginning from full electronic and phonon dispersions, the Landauer approach is used to evaluate the temperature-dependent thermoelectric transport parameters needed for device simulation. A comprehensive SPICE-based model for electro-thermal transport has also been developed to serve as a bridge between the materials- and device-level descriptions and the system-level simulations. This prototype framework has been used to design a thermoelectric cooler for managing hot spots in integrated circuit chips. In addition, as a byproduct of this research, a suite of educational and simulation resources has been developed and deployed on the nanoHUB.org science gateway to serve as a resource for the TE community.
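For orientation, the Landauer-level transport coefficients referred to above are commonly written in terms of a transmission function and the equilibrium Fermi function, as sketched below; the exact conventions and additional coefficients used in the thesis may differ.

```latex
G = \frac{2q^{2}}{h}\int \mathcal{T}(E)\left(-\frac{\partial f_{0}}{\partial E}\right)dE,
\qquad
S = -\frac{k_{B}}{q}\,
\frac{\displaystyle\int \mathcal{T}(E)\,\frac{E-\mu}{k_{B}T}\left(-\frac{\partial f_{0}}{\partial E}\right)dE}
     {\displaystyle\int \mathcal{T}(E)\left(-\frac{\partial f_{0}}{\partial E}\right)dE}
```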
Code of Federal Regulations, 2014 CFR
2014-01-01
..., RaycomInfotech, Park Tower C, No. 2 Science Institute South Rd., Zhong Guan Cun, Haidian District... C, No. 2 Science Institute South Rd., Zhong Guan Cun, Haidian District, Beijing, China 100190 75 FR...(limited to technology for computer products or components not exceeding an adjusted peak performance (APP...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 11011 Indian IC For the “organized” sector, except for computers and related equipment: Directorate... 110011 Indian IC For computers and related electronic items: Department of Electronics, Lok Nayak Bhawan... and Exports 5, Civic Center Islamabad IC Joint Science Advisor, Ministry of Science and Technology...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 11011 Indian IC For the “organized” sector, except for computers and related equipment: Directorate... 110011 Indian IC For computers and related electronic items: Department of Electronics, Lok Nayak Bhawan... and Exports 5, Civic Center Islamabad IC Joint Science Advisor, Ministry of Science and Technology...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 11011 Indian IC For the “organized” sector, except for computers and related equipment: Directorate... 110011 Indian IC For computers and related electronic items: Department of Electronics, Lok Nayak Bhawan... and Exports 5, Civic Center Islamabad IC Joint Science Advisor, Ministry of Science and Technology...
High-End Computing Challenges in Aerospace Design and Engineering
NASA Technical Reports Server (NTRS)
Bailey, F. Ronald
2004-01-01
High-End Computing (HEC) has had a significant impact on aerospace design and engineering and is poised to have even more in the future. In this paper we describe four aerospace design and engineering challenges: Digital Flight, Launch Simulation, Rocket Fuel System and Digital Astronaut. The paper discusses the modeling capabilities needed for each challenge and presents projections of future near- and far-term HEC computing requirements. NASA's HEC Project Columbia is described, and programming strategies necessary to achieve high real performance are presented.
Status report of the end-to-end ASKAP software system: towards early science operations
NASA Astrophysics Data System (ADS)
Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew
2016-08-01
The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 reflector antennas, each 12 m in diameter and equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way BETA has been producing some great science results. Commissioning of ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (a 1 PB Lustre file system) and the processing supercomputer (a 200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional", user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline, required to support the full 300 MHz bandwidth for Array Release 1, starts in early 2016 and will be followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.
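For a sense of scale, the quoted 70 TB per 8-hour observation corresponds to the sustained rate below (taking 1 TB = 10^12 bytes); this back-of-the-envelope conversion is added here for context and is not taken from the paper.

```latex
\frac{70\ \mathrm{TB}}{8\ \mathrm{h}}
= \frac{70\times 10^{12}\ \mathrm{bytes}}{28\,800\ \mathrm{s}}
\approx 2.4\ \mathrm{GB/s}
\approx 19\ \mathrm{Gbit/s}
```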
The Use of Computer Software to Teach High Technology Skills to Vocational Students.
ERIC Educational Resources Information Center
Farmer, Edgar I.
A study examined the type of computer software that is best suited to teach high technology skills to vocational students. During the study, 50 manufacturers of computer software and hardware were sent questionnaires designed to gather data concerning their recommendations in regard to: software to teach high technology skills to vocational…
Mass storage: The key to success in high performance computing
NASA Technical Reports Server (NTRS)
Lee, Richard R.
1993-01-01
There are numerous High Performance Computing and Communications initiatives in the world today. All are determined to help solve some 'Grand Challenge' type of problem, but each appears to be dominated by the pursuit of higher and higher levels of CPU performance and interconnection bandwidth as the approach to success, without any regard to the impact of Mass Storage. My colleagues and I at Data Storage Technologies believe that the performance of all of these initiatives against their goals will ultimately be measured by their ability to efficiently store and retrieve the 'deluge of data' created by end users who will be using these systems to solve scientific Grand Challenge problems, and that the issue of Mass Storage will then become the determinant of success or failure in achieving each project's goals. In today's world of High Performance Computing and Communications (HPCC), the critical path to success in solving problems can only be traveled by designing and implementing Mass Storage Systems capable of storing and manipulating the truly 'massive' amounts of data associated with solving these challenges. Within my presentation I will explore this critical issue and hypothesize solutions to this problem.
NASA Astrophysics Data System (ADS)
Ramamurthy, M.
2005-12-01
A revolution is underway in the role played by cyberinfrastructure and data services in the conduct of research and education. We live in an era of an unprecedented data volume from diverse sources, multidisciplinary analysis and synthesis, and active, learner-centered education emphasis. For example, modern remote-sensing systems like hyperspectral satellite instruments generate terabytes of data each day. Environmental problems such as global change and the water cycle transcend disciplinary as well as geographic boundaries, and their solution requires integrated earth system science approaches. Contemporary education strategies recommend adopting an Earth system science approach for teaching the geosciences, employing new pedagogical techniques such as enquiry-based learning and hands-on activities. Needless to add, today's education and research enterprise depends heavily on robust, flexible and scalable cyberinfrastructure, especially on the ready availability of quality data and appropriate tools to manipulate and integrate those data. Fortuitously, rapid advances in computing and communication technologies have also revolutionized how data, tools and services are being incorporated into the teaching and scientific enterprise. The exponential growth in the use of the Internet in education and research, largely due to the advent of the World Wide Web, is by now well documented. On the other hand, how some of the other technological and community trends have shaped the use of cyberinfrastructure, especially data services, is less well understood. For example, the computing industry is converging on an approach called Web services that enables a standard and yet revolutionary way of building applications and methods to connect and exchange information over the Web. This new approach, based on XML - a widely accepted format for exchanging data and corresponding semantics over the Internet - enables applications, computer systems, and information processes to work together in a fundamentally different way. Likewise, the advent of digital libraries, grid computing platforms, interoperable frameworks, standards and protocols, open-source software, and community atmospheric models have been important drivers in shaping the use of a new generation of end-to-end cyberinfrastructure for solving some of the most challenging scientific and educational problems. In this talk, I will present an overview of the scientific, technological, and educational drivers and discuss recent developments in cyberinfrastructure and Unidata's role and directions in providing robust, end-to-end data services for solving geoscientific problems and advancing student learning.
Interactive Voice/Web Response System in clinical research
Ruikar, Vrishabhsagar
2016-01-01
Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is one of the user-friendly systems for end users, with complex, tailored programs at its back end. The back-end programs are specially tailored for easy understanding by users. The clinical research industry has experienced a revolution in data capture methodologies over time. Over the past couple of decades, different systems have evolved alongside emerging modern technologies and tools, for example, Electronic Data Capture, IxRS, and electronic patient-reported outcomes. PMID:26952178
Interactive Voice/Web Response System in clinical research.
Ruikar, Vrishabhsagar
2016-01-01
Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is one of the user-friendly systems for end users, with complex, tailored programs at its back end. The back-end programs are specially tailored for easy understanding by users. The clinical research industry has experienced a revolution in data capture methodologies over time. Over the past couple of decades, different systems have evolved alongside emerging modern technologies and tools, for example, Electronic Data Capture, IxRS, and electronic patient-reported outcomes.
NASA GRC Stirling Technology Development Overview
NASA Technical Reports Server (NTRS)
Thieme, Lanny G.; Schreiber, Jeffrey G.
2003-01-01
The Department of Energy, Lockheed Martin (LM), Stirling Technology Company, and NASA Glenn Research Center (GRC) are developing a high-efficiency Stirling Radioisotope Generator (SRG) for potential NASA Space Science missions. The SRG is being developed for multimission use, including providing spacecraft onboard electric power for NASA deep space missions and power for unmanned Mars rovers. NASA GRC is conducting an in-house supporting technology project to assist in developing the Stirling convertor for space qualification and mission implementation. Preparations are underway for a thermal/vacuum system demonstration and unattended operation during endurance testing of the 55-We Technology Demonstration Convertors. Heater head life assessment efforts continue, including verification of the heater head brazing and heat treatment schedules and evaluation of any potential regenerator oxidation. Long-term magnet aging tests are continuing to characterize any possible aging in the strength or demagnetization resistance of the permanent magnets used in the linear alternator. Testing of the magnet/lamination epoxy bond for performance and lifetime characteristics is now underway. These efforts are expected to provide key inputs as the system integrator, LM, begins system development of the SRG. GRC is also developing advanced technology for Stirling convertors. Cleveland State University (CSU) is progressing toward a multi-dimensional Stirling computational fluid dynamics code, capable of modeling complete convertors. Validation efforts at both CSU and the University of Minnesota are complementing the code development. New efforts have been started this year on a lightweight convertor, advanced controllers, high-temperature materials, and an end-to-end system dynamics model. Performance and mass improvement goals have been established for second- and third-generation Stirling radioisotope power systems.
Analysis, Mining and Visualization Service at NCSA
NASA Astrophysics Data System (ADS)
Wilhelmson, R.; Cox, D.; Welge, M.
2004-12-01
NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end, NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services. Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGI systems recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.
Baldwin, Constance D; Niebuhr, Virginia N; Sullivan, Brian
2004-01-01
We aimed to identify the evolving computer technology needs and interests of community faculty in order to design an effective faculty development program focused on computer skills: the Teaching and Learning Through Educational Technology (TeLeTET) program. Repeated surveys were conducted between 1994 and 2002 to assess computer resources and needs in a pool of over 800 primary care physician-educators in community practice in East Texas. Based on the results, we developed and evaluated several models to teach community preceptors about computer technologies that are useful for education. Before 1998, only half of our community faculty identified a strong interest in developing their technology skills. As the revolution in telecommunications advanced, however, preceptors' needs and interests changed, and the use of this technology to support community-based teaching became feasible. In 1998 and 1999, resource surveys showed that many of our community teaching sites had computers and Internet access. By 2001, the desire for teletechnology skills development was strong in a nucleus of community faculty, although lack of infrastructure, time, and skills were identified barriers. The TeLeTET project developed several innovative models for technology workshops and conferences, supplemented by online resources, that were well attended and positively evaluated by 181 community faculty over a 3-year period. We have identified the evolving needs of community faculty through iterative needs assessments, developed a flexible faculty development curriculum, and used open-ended, formative evaluation techniques to keep the TeLeTET program responsive to a rapidly changing environment for community-based education in computer technology.
A Genetic Algorithm Approach to Motion Sensor Placement in Smart Environments.
Thomas, Brian L; Crandall, Aaron S; Cook, Diane J
2016-04-01
Smart environments and ubiquitous computing technologies hold great promise for a wide range of real world applications. The medical community is particularly interested in high quality measurement of activities of daily living. With accurate computer modeling of older adults, decision support tools may be built to assist care providers. One aspect of effectively deploying these technologies is determining where the sensors should be placed in the home to effectively support these end goals. This work introduces and evaluates a set of approaches for generating sensor layouts in the home. These approaches range from the gold standard of human intuition-based placement to more advanced search algorithms, including Hill Climbing and Genetic Algorithms. The generated layouts are evaluated based on their ability to detect activities while minimizing the number of needed sensors. Sensor-rich environments can provide valuable insights about adults as they go about their lives. These sensors, once in place, provide information on daily behavior that can facilitate an aging-in-place approach to health care.
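To make the search idea concrete, here is a minimal genetic-algorithm sketch for selecting sensor locations on a grid; the grid, coverage rule, and fitness weighting are hypothetical stand-ins rather than the evaluation used in the paper.

```python
# Minimal sketch of a genetic algorithm for choosing a small sensor layout:
# a candidate is a subset of possible locations, and fitness rewards activity
# coverage while penalising sensor count. All parameters are illustrative.

import random

LOCATIONS = [(x, y) for x in range(5) for y in range(5)]   # candidate spots
ACTIVITY_SPOTS = [(0, 0), (1, 3), (3, 1), (4, 4), (2, 2)]  # where activity occurs

def covers(sensor, spot):
    return abs(sensor[0] - spot[0]) + abs(sensor[1] - spot[1]) <= 1

def fitness(layout):
    covered = sum(any(covers(s, a) for s in layout) for a in ACTIVITY_SPOTS)
    return covered - 0.2 * len(layout)        # coverage minus a cost per sensor

def random_layout():
    return random.sample(LOCATIONS, k=random.randint(2, 6))

def crossover(a, b):
    return list(set(random.sample(a, len(a) // 2 + 1) +
                    random.sample(b, len(b) // 2 + 1)))

def mutate(layout):
    if random.random() < 0.3:
        layout = layout + [random.choice(LOCATIONS)]
    if len(layout) > 2 and random.random() < 0.3:
        layout = random.sample(layout, len(layout) - 1)
    return layout

population = [random_layout() for _ in range(30)]
for _ in range(40):                            # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]
best = max(population, key=fitness)
print("best layout:", best, "fitness:", round(fitness(best), 2))
```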
A Genetic Algorithm Approach to Motion Sensor Placement in Smart Environments
Thomas, Brian L.; Crandall, Aaron S.; Cook, Diane J.
2016-01-01
Smart environments and ubiquitous computing technologies hold great promise for a wide range of real world applications. The medical community is particularly interested in high quality measurement of activities of daily living. With accurate computer modeling of older adults, decision support tools may be built to assist care providers. One aspect of effectively deploying these technologies is determining where the sensors should be placed in the home to effectively support these end goals. This work introduces and evaluates a set of approaches for generating sensor layouts in the home. These approaches range from the gold standard of human intuition-based placement to more advanced search algorithms, including Hill Climbing and Genetic Algorithms. The generated layouts are evaluated based on their ability to detect activities while minimizing the number of needed sensors. Sensor-rich environments can provide valuable insights about adults as they go about their lives. These sensors, once in place, provide information on daily behavior that can facilitate an aging-in-place approach to health care. PMID:27453810
A Research Program in Computer Technology. 1982 Annual Technical Report
1983-03-01
Research performed for the Defense Advanced Research Projects Agency. The research applies computer science and technology to areas of high DoD/military impact. The ISI ... implement the plan; New Computing Environment - investigation and adaptation of developing computer technologies to serve the research and military user communities; and Computer ...
Earth Science Technology Office's Computational Technologies Project
NASA Technical Reports Server (NTRS)
Fischer, James (Technical Monitor); Merkey, Phillip
2005-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies (CT) Project and to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community, so that we can predict the applicability of these technologies to the scientific community represented by the CT project and formulate long-term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations and algorithmic requirements, and the capabilities of high-performance computers to satisfy this anticipated need.
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-04-05
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.
Software beamforming: comparison between a phased array and synthetic transmit aperture.
Li, Yen-Feng; Li, Pai-Chi
2011-04-01
The data-transfer and computation requirements are compared between software-based beamforming using a phased array (PA) and a synthetic transmit aperture (STA). The advantages of a software-based architecture are reduced system complexity and lower hardware cost. Although this architecture can be implemented using commercial CPUs or GPUs, the high computation and data-transfer requirements limit its real-time beamforming performance. In particular, transferring the raw RF data from the front-end subsystem to the software back end remains challenging with current state-of-the-art electronics technologies, which offsets the cost advantage of the software back end. This study investigated the tradeoff between the data-transfer and computation requirements. Two beamforming methods, based on a PA and an STA, respectively, were used: the former requires a higher data-transfer rate and the latter requires more memory operations. The beamformers were implemented on an NVIDIA GeForce GTX 260 GPU and an Intel Core i7 920 CPU. The frame rate of PA beamforming was 42 fps with a 128-element array transducer, with 2048 samples per firing and 189 beams per image (with a 95 MB/frame data-transfer requirement). The frame rate of STA beamforming was 40 fps with 16 firings per image (with an 8 MB/frame data-transfer requirement). Both approaches achieved real-time beamforming performance, but each had its own bottleneck: the required data-transfer speed was considerably reduced in STA beamforming, but this approach required more memory operations, which limited the overall computation time. The advantages of the GPU approach over the CPU approach were clearly demonstrated.
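The per-frame figures quoted above are consistent with one firing per beam and 16-bit samples, neither of which is stated explicitly in the abstract; under those assumptions the raw-data volumes work out as follows.

```latex
\text{PA:}\quad 128 \times 2048 \times 189 \times 2\ \mathrm{bytes} \approx 95\ \mathrm{MB/frame},
\qquad
\text{STA:}\quad 128 \times 2048 \times 16 \times 2\ \mathrm{bytes} = 8\ \mathrm{MB/frame}
```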
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.
2009-05-01
It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.; Herner, K.; Jayatilaka, B.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology, as well as leveraging resources available from currently running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
Foundational Tools for Petascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-05-19
The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) the analysis, instrumentation and control of binary programs; work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale; work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, "High-Performance Energy Applications and Systems", SC0004061/FG02-10ER25972, UW PRJ36WV.
Data preservation at the Fermilab Tevatron
Boyd, J.; Herner, K.; Jayatilaka, B.; ...
2015-12-23
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology, as well as leveraging resources available from currently running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
GPU Particle Tracking and MHD Simulations with Greatly Enhanced Computational Speed
NASA Astrophysics Data System (ADS)
Ziemba, T.; O'Donnell, D.; Carscadden, J.; Cash, M.; Winglee, R.; Harnett, E.
2008-12-01
GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude greater computing speed than CPU-based systems, for less cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE floating-point specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should reduce the time to discovery. 3-D particle tracking and MHD codes have been developed using NVIDIA's CUDA and have demonstrated speed-ups of nearly a factor of 20 over equivalent CPU versions of the codes. Such a speed-up enables new applications, including real-time running of radiation belt simulations and real-time running of global magnetospheric simulations, both of which could provide important space weather prediction tools.
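To make the data-parallel structure concrete, the sketch below performs the kind of per-particle update such codes execute, here with a simple Boris rotation in a uniform magnetic field; this is an illustrative NumPy stand-in, not the CUDA code described above, and every parameter in it is an assumption.

```python
# Data-parallel particle push: every particle's update is independent, which
# is why such codes map naturally onto GPUs. This NumPy sketch uses a Boris
# rotation in a uniform magnetic field (no electric field) as a stand-in.

import numpy as np

n, dt, qm = 100_000, 1e-3, 1.0            # particles, timestep, charge/mass (assumed)
B = np.array([0.0, 0.0, 1.0])             # uniform magnetic field (assumed)

pos = np.random.rand(n, 3)
vel = np.random.randn(n, 3)

def push(pos, vel):
    # Boris rotation for the v x B force; energy-conserving for E = 0.
    t = 0.5 * dt * qm * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = vel + np.cross(vel, t)
    vel_new = vel + np.cross(v_prime, s)
    return pos + dt * vel_new, vel_new

for _ in range(10):
    pos, vel = push(pos, vel)
print(pos[0], vel[0])
```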
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.
2015-12-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology, as well as leveraging resources available from currently running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
ERIC Educational Resources Information Center
Murugaiah, Puvaneswary
2016-01-01
In computer-assisted language learning (CALL), technological tools are often used both as an end and as a means to an end (Levy & Stockwell, 2006). Microsoft PowerPoint is an example of the latter as it is commonly used in oral presentations in classrooms. However, many student presentations are often boring as students generally read from…
Using Neural Net Technology To Enhance the Efficiency of a Computer Adaptive Testing Application.
ERIC Educational Resources Information Center
Van Nelson, C.; Henriksen, Larry W.
The potential for computer adaptive testing (CAT) has been well documented. In order to improve the efficiency of this process, it may be possible to utilize a neural network, or more specifically, a back propagation neural network. The paper asserts that in order to accomplish this end, it must be shown that grouping examinees by ability as…
NASA Technical Reports Server (NTRS)
1976-01-01
Technologies required to support the stated OAST thrust to increase information return by a factor of 1000, while reducing costs by a factor of 10, are identified. The most significant driver is the need for an overall end-to-end data system management technology. Maximum use of LSI component technology and trade-offs between hardware and software are manifest in almost all considerations of technology needs. By far, the greatest need for data handling technology was identified for the Space Exploration and Global Services themes. Major advances are needed in NASA's ability to provide cost-effective mass reduction of space data, and automated assessment of Earth-looking imagery, with a concomitant reduction in cost per useful bit. A combined approach would be necessary, embodying end-to-end system analysis, with onboard data set selection, onboard data processing, highly parallel image processing (both ground and space), low-cost, high-capacity memories, and low-cost user data distribution systems.
Atlas2 Cloud: a framework for personal genome analysis in the cloud
2012-01-01
Background: Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. Results: We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. Conclusions: We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms. PMID:23134663
Atlas2 Cloud: a framework for personal genome analysis in the cloud.
Evani, Uday S; Challis, Danny; Yu, Jin; Jackson, Andrew R; Paithankar, Sameer; Bainbridge, Matthew N; Jakkamsetti, Adinarayana; Pham, Peter; Coarfa, Cristian; Milosavljevic, Aleksandar; Yu, Fuli
2012-01-01
Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms.
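As an illustration of how such cost projections can be assembled, the sketch below combines storage, compute and I/O components per sample; all rates and usage figures are hypothetical placeholders, not actual Amazon Web Services prices or Atlas2 measurements.

```python
# Minimal sketch of projecting per-sample cloud cost from storage, compute and
# data-transfer components, in the spirit of the cost structure described
# above. Every rate and usage figure below is a hypothetical placeholder.

samples = 100
per_sample = {
    "compute_hours": 6.0,      # instance-hours per exome (assumed)
    "storage_gb_month": 15.0,  # data retained per sample (assumed)
    "transfer_gb": 10.0,       # data moved in/out per sample (assumed)
}
rates = {
    "compute_per_hour": 0.50,        # hypothetical $/hour
    "storage_per_gb_month": 0.03,    # hypothetical $/GB-month
    "transfer_per_gb": 0.09,         # hypothetical $/GB
}

cost_per_sample = (per_sample["compute_hours"] * rates["compute_per_hour"]
                   + per_sample["storage_gb_month"] * rates["storage_per_gb_month"]
                   + per_sample["transfer_gb"] * rates["transfer_per_gb"])
print(f"per sample: ${cost_per_sample:.2f}, "
      f"projected for {samples} samples: ${cost_per_sample * samples:.2f}")
```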
NASA Astrophysics Data System (ADS)
Burnett, W.
2016-12-01
The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.
Stephenson, Aoife; McDonough, Suzanne M; Murphy, Marie H; Nugent, Chris D; Mair, Jacqueline L
2017-08-11
High levels of sedentary behaviour (SB) are associated with negative health consequences. Technology enhanced solutions such as mobile applications, activity monitors, prompting software, texts, emails and websites are being harnessed to reduce SB. The aim of this paper is to evaluate the effectiveness of such technology enhanced interventions aimed at reducing SB in healthy adults and to examine the behaviour change techniques (BCTs) used. Five electronic databases were searched to identify randomised-controlled trials (RCTs), published up to June 2016. Interventions using computer, mobile or wearable technologies to facilitate a reduction in SB, using a measure of sedentary time as an outcome, were eligible for inclusion. Risk of bias was assessed using the Cochrane Collaboration's tool and interventions were coded using the BCT Taxonomy (v1). Meta-analysis of 15/17 RCTs suggested that computer, mobile and wearable technology tools resulted in a mean reduction of -41.28 min per day (min/day) of sitting time (95% CI -60.99, -21.58, I2 = 77%, n = 1402), in favour of the intervention group at end point follow-up. The pooled effects showed mean reductions at short (≤ 3 months), medium (>3 to 6 months), and long-term follow-up (>6 months) of -42.42 min/day, -37.23 min/day and -1.65 min/day, respectively. Overall, 16/17 studies were deemed as having a high or unclear risk of bias, and 1/17 was judged to be at a low risk of bias. A total of 46 BCTs (14 unique) were coded for the computer, mobile and wearable components of the interventions. The most frequently coded were "prompts and cues", "self-monitoring of behaviour", "social support (unspecified)" and "goal setting (behaviour)". Interventions using computer, mobile and wearable technologies can be effective in reducing SB. Effectiveness appeared most prominent in the short-term and lessened over time. A range of BCTs have been implemented in these interventions. Future studies need to improve reporting of BCTs within interventions and address the methodological flaws identified within the review through the use of more rigorously controlled study designs with longer-term follow-ups, objective measures of SB and the incorporation of strategies to reduce attrition. The review protocol was registered with PROSPERO: CRD42016038187.
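The pooled estimates quoted above come from a random-effects meta-analysis; the sketch below shows, using the standard DerSimonian-Laird formulas, how a pooled mean difference, its 95% CI, and I² are obtained from per-study estimates. The three studies in the example are invented numbers, not data from this review.

```python
import numpy as np

# Illustrative DerSimonian-Laird random-effects pooling of mean differences
# (min/day of sitting time). Study values below are made up for the example.
yi = np.array([-55.0, -30.0, -40.0])    # per-study mean differences
se = np.array([12.0, 10.0, 15.0])       # per-study standard errors
vi = se ** 2

w = 1.0 / vi                             # fixed-effect (inverse-variance) weights
y_fe = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - y_fe) ** 2)         # heterogeneity statistic
df = len(yi) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
I2 = max(0.0, (Q - df) / Q) * 100        # percent of variance from heterogeneity

w_re = 1.0 / (vi + tau2)                 # random-effects weights
y_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (y_re - 1.96 * se_re, y_re + 1.96 * se_re)
print(f"pooled = {y_re:.1f} min/day, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f}), I^2 = {I2:.0f}%")
```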
Taherian, Sarvnaz; Selitskiy, Dmitry; Pau, James; Claire Davies, T
2017-02-01
Using a commercial electroencephalography (EEG)-based brain-computer interface (BCI), the training and testing protocol for six individuals with spastic quadriplegic cerebral palsy (GMFCS and MACS IV and V) was evaluated. A customised, gamified training paradigm was employed. Over three weeks, the participants spent two sessions exploring the system, and up to six sessions playing the game which focussed on EEG feedback of left and right arm motor imagery. The participants showed variable inconclusive results in the ability to produce two distinct EEG patterns. Participant performance was influenced by physical illness, motivation, fatigue and concentration. The results from this case study highlight the infancy of BCIs as a form of assistive technology for people with cerebral palsy. Existing commercial BCIs are not designed according to the needs of end-users. Implications for Rehabilitation Mood, fatigue, physical illness and motivation influence the usability of a brain-computer interface. Commercial brain-computer interfaces are not designed for practical assistive technology use for people with cerebral palsy. Practical brain-computer interface assistive technologies may need to be flexible to suit individual needs.
Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith
2009-01-01
This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing us on planet Earth in the 21st century. Scientists build many models to simulate the past and predict climate change over the next decades or century. Most of the models run at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project to provide management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow the project on Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences. It will share how the challenges in computation and software integration were solved.
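The dispatch/collect pattern in item 2) can be sketched as a small map/reduce loop: configurations go out to workers, results come back and are reduced to the best run. The run_model function and its dummy scoring below are placeholders, not the Climate@Home model code.

```python
from multiprocessing import Pool

def run_model(config):
    """Stand-in for one climate model run; returns (run_id, score)."""
    score = abs(config["forcing"] - 3.7) * config["resolution"]  # dummy skill metric
    return config["run_id"], score

def dispatch(configs, workers=4):
    with Pool(workers) as pool:
        results = pool.map(run_model, configs)      # "map": dispatch the runs
    best = min(results, key=lambda r: r[1])         # "reduce": keep the best run
    return dict(results), best

if __name__ == "__main__":
    configs = [{"run_id": i, "resolution": 1.0, "forcing": 3.0 + 0.2 * i}
               for i in range(8)]
    all_scores, best_run = dispatch(configs)
    print("best configuration:", best_run)
```

In the volunteer-computing setting described above, the worker pool would be the pool of contributed lower-end nodes rather than local processes, but the dispatch-and-reduce structure is the same.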
Overview of Human-Centric Space Situational Awareness Science and Technology
2012-09-01
AGI), the developers of Satellite Tool Kit (STK), has provided demonstrations of innovative SSA visualization concepts that take advantage of the...needs inherent with SSA. RH has conducted CTAs and developed work-centered human-computer interfaces, visualizations, and collaboration technologies...all end users. RH's Battlespace Visualization Branch researches methods to exploit the visual channel primarily to improve decision making and
Building a Semantic Framework for eScience
NASA Astrophysics Data System (ADS)
Movva, S.; Ramachandran, R.; Maskey, M.; Li, X.
2009-12-01
The e-Science vision focuses on the use of advanced computing technologies to support scientists. Recent research efforts in this area have focused primarily on “enabling” use of infrastructure resources for both data and computational access, especially in the Geosciences. One of the gaps in existing e-Science efforts has been the failure to incorporate stable semantic technologies within the design process itself. In this presentation, we describe our effort in designing a framework for e-Science built using Service Oriented Architecture. Our framework provides users with capabilities to create science workflows and mine distributed data. Our e-Science framework is being designed around a mass market tool to promote reusability across many projects. Semantics is an integral part of this framework and our design goal is to leverage the latest stable semantic technologies. The use of these stable semantic technologies will provide the users of our framework with useful features such as: allowing search engines to find their content with RDFa tags; creating an RDF triple data store for their content; creating RDF end points to share with others; and semantically mashing their content with other online content available as RDF end points.
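A minimal sketch of the triple-store and RDF end point features listed above, assuming the rdflib Python library; the vocabulary, URIs, and workflow names are invented for the example and are not part of the framework described in the abstract.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

# Build a small RDF triple store describing a science workflow and query it
# with SPARQL. All URIs and terms below are hypothetical.
ESCI = Namespace("http://example.org/escience/")

g = Graph()
wf = URIRef("http://example.org/workflows/storm-tracking")
g.add((wf, RDF.type, ESCI.Workflow))
g.add((wf, DC.title, Literal("Storm tracking workflow")))
g.add((wf, ESCI.usesDataset, URIRef("http://example.org/data/goes-imagery")))

# Serialize the content as Turtle, e.g. for exposure through an RDF end point.
print(g.serialize(format="turtle"))

# Other services (or a search interface) can then query the same store.
q = """SELECT ?wf ?title WHERE {
         ?wf a <http://example.org/escience/Workflow> ;
             <http://purl.org/dc/elements/1.1/title> ?title . }"""
for row in g.query(q):
    print(row.wf, row.title)
```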
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, K.W.; Scott, K.P.
2000-11-01
Since the 2020 Vision project began in 1996, students from participating schools have completed and submitted a variety of scenarios describing potential world and regional conditions in the year 2020 and their possible effect on US national security. This report summarizes the students' views and describes trends observed over the course of the 2020 Vision project's five years. It also highlights the main organizational features of the project. An analysis of thematic trends among the scenarios showed interesting shifts in students' thinking, particularly in their views of computer technology, US relations with China, and globalization. In 1996, most students perceived computer technology as highly beneficial to society, but as the year 2000 approached, this technology was viewed with fear and suspicion, even personified as a malicious, uncontrollable being. Yet, after New Year's passed with little disruption, students generally again perceived computer technology as beneficial. Also in 1996, students tended to see US relations with China as potentially positive, with economic interaction proving favorable to both countries. By 2000, this view had transformed into a perception of China emerging as the US' main rival and "enemy" in the global geopolitical realm. Regarding globalization, students in the first two years of the project tended to perceive world events as dependent on US action. However, by the end of the project, they saw the US as having little control over world events and therefore, we Americans would need to cooperate and compromise with other nations in order to maintain our own well-being.
Civil propulsion technology for the next twenty-five years
NASA Technical Reports Server (NTRS)
Rosen, Robert; Facey, John R.
1987-01-01
The next twenty-five years will see major advances in civil propulsion technology that will result in completely new aircraft systems for domestic, international, commuter and high-speed transports. These aircraft will include advanced aerodynamic, structural, and avionic technologies resulting in major new system capabilities and economic improvements. Propulsion technologies will include high-speed turboprops in the near term, very high bypass ratio turbofans, high efficiency small engines and advanced cycles utilizing high temperature materials for high-speed propulsion. Key fundamental enabling technologies include increased temperature capability and advanced design methods. Increased temperature capability will be based on improved composite materials such as metal matrix, intermetallics, ceramics, and carbon/carbon as well as advanced heat transfer techniques. Advanced design methods will make use of advances in internal computational fluid mechanics, reacting flow computation, computational structural mechanics and computational chemistry. The combination of advanced enabling technologies, new propulsion concepts and advanced control approaches will provide major improvements in civil aircraft.
The Role of Networks in Cloud Computing
NASA Astrophysics Data System (ADS)
Lin, Geng; Devine, Mac
The confluence of technology advancements and business developments in Broadband Internet, Web services, computing systems, and application software over the past decade has created a perfect storm for cloud computing. The "cloud model" of delivering and consuming IT functions as services is poised to fundamentally transform the IT industry and rebalance the inter-relationships among end users, enterprise IT, software companies, and the service providers in the IT ecosystem (Armbrust et al., 2009; Lin, Fu, Zhu, & Dasmalchi, 2009).
New Directions in Space Operations Services in Support of Interplanetary Exploration
NASA Technical Reports Server (NTRS)
Bradford, Robert N.
2005-01-01
To gain access to the necessary operational processes and data in support of NASA's Lunar/Mars Exploration Initiative, new services, adequate levels of computing cycles and access to myriad forms of data must be provided to onboard spacecraft and ground based personnel/systems (earth, lunar and Martian) to enable interplanetary exploration by humans. These systems, cycles and access to vast amounts of development, test and operational data will be required to provide a new level of services not currently available to existing spacecraft, on board crews and other operational personnel. Although current voice, video and data systems in support of current space based operations have been adequate, new highly reliable and autonomous processes and services will be necessary for future space exploration activities. These services will range from the more mundane voice in LEO to voice in interplanetary travel, which, because of the high latencies, will require new voice processes and standards. New services, like component failure predictions based on data mining of significant quantities of data located at disparate locations, will be required. 3D or holographic representation of onboard components, systems or family members will greatly improve maintenance, operations and service restoration, not to mention crew morale. Current operational systems and standards, like the Internet Protocol, will not be able to provide the level of service required end to end, from an end point on the Martian surface, such as a scientific instrument, to a researcher at a university. Ground operations, whether earth, lunar or Martian, and in-flight operations to the Moon and especially to Mars will require significant autonomy, which in turn will require access to highly reliable processing capabilities and data storage based on network storage technologies. Significant processing cycles will be needed onboard but could be borrowed from other locations, either ground based or onboard other spacecraft. Reliability will be a key factor, with onboard and distributed backup processing an absolutely necessary requirement. Current cluster processing/Grid technologies may provide the basis for providing these services. An overview of existing services, future services that will be required and the technologies and standards required to be developed will be presented. The purpose of this paper will be to initiate a technological roadmap, albeit at a high level, from current voice, video, data and network technologies and standards (which show promise for adaptation or evolution) to what technologies and standards need to be redefined or adjusted, and areas where new ones require development. The roadmap should begin the differentiation between unmanned and manned processes/services where applicable. The paper will be based in part on the activities of the CCSDS Monitor and Control working group, which is beginning the process of standardization of these processes. Another element of the paper will be based on an analysis of current technologies supporting space flight processes and services at JSC, MSFC, GSFC and to a lesser extent at KSC. Work being accomplished in areas such as Grid computing, data mining and network storage at ARC, IBM and the University of Alabama at Huntsville will be researched and analyzed.
2004-07-01
steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national...sub-nuclear distances. Guide experiments to identify transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics Accurate
Health Information Technology as a Universal Donor to Bioethics Education.
Goodman, Kenneth W
2017-04-01
Health information technology, sometimes called biomedical informatics, is the use of computers and networks in the health professions. This technology has become widespread, from electronic health records to decision support tools to patient access through personal health records. These computational and information-based tools have engendered their own ethics literature and now present an opportunity to shape the standard medical and nursing ethics curricula. It is suggested that each of four core components in the professional education of clinicians (privacy, end-of-life care, access to healthcare and valid consent, and clinician-patient communication) offers an opportunity to leverage health information technology for curricular improvement. Using informatics in ethics education freshens ethics pedagogy and increases its utility, and does so without additional demands on overburdened curricula.
Computers and Data Processing. Subject Bibliography.
ERIC Educational Resources Information Center
United States Government Printing Office, Washington, DC.
This annotated bibliography of U.S. Government publications contains over 90 entries on topics including telecommunications standards, U.S. competitiveness in high technology industries, computer-related crimes, capacity management of information technology systems, the application of computer technology in the Soviet Union, computers and…
New Technologies for the Diagnosis of Sleep Apnea.
Alshaer, Hisham
2016-01-01
Sleep Apnea is a very common condition that has serious cardiovascular sequelae such as hypertension, heart failure, and stroke. Since the advent of modern computers and digital circuits, several streams of new technologies have been introduced to enhance the traditional diagnostic method of polysomnography and offer alternatives that are more accessible, comfortable, and economic. The categories presented in this review include portable polygraphy, mattress-like devices, remote sensing, and acoustic technologies. These innovations are classified as a function of their physical structure and the capabilities of their sensing technologies, due to the importance of these factors in determining the end-user experiences (both patients and medical professionals). Each of those categories offers unique strengths, which then make them particularly suitable for specific applications and end users. To our knowledge, this is a unique approach in presenting and classifying sleep apnea diagnostic innovations.
Integration of Modelling and Graphics to Create an Infrared Signal Processing Test Bed
NASA Astrophysics Data System (ADS)
Sethi, H. R.; Ralph, John E.
1989-03-01
The work reported in this paper was carried out as part of a contract with MoD (PE) UK. It considers the problems associated with realistic modelling of a passive infrared system in an operational environment. Ideally all aspects of the system and environment should be integrated into a complete end-to-end simulation but in the past limited computing power has prevented this. Recent developments in workstation technology and the increasing availability of parallel processing techniques make the end-to-end simulation possible. However the complexity and speed of such simulations means difficulties for the operator in controlling the software and understanding the results. These difficulties can be greatly reduced by providing an extremely user friendly interface and a very flexible, high power, high resolution colour graphics capability. Most system modelling is based on separate software simulation of the individual components of the system itself and its environment. These component models may have their own characteristic inbuilt assumptions and approximations, may be written in the language favoured by the originator and may have a wide variety of input and output conventions and requirements. The models and their limitations need to be matched to the range of conditions appropriate to the operational scenario. A comprehensive set of data bases needs to be generated by the component models and these data bases must be made readily available to the investigator. Performance measures need to be defined and displayed in some convenient graphical form. Some options are presented for combining available hardware and software to create an environment within which the models can be integrated, and which provide the required man-machine interface, graphics and computing power. The impact of massively parallel processing and artificial intelligence will be discussed. Parallel processing will make real time end-to-end simulation possible and will greatly improve the graphical visualisation of the model output data. Artificial intelligence should help to enhance the man-machine interface.
Algorithm for fast event parameters estimation on GEM acquired data
NASA Astrophysics Data System (ADS)
Linczuk, Paweł; Krawczyk, Rafał D.; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Chernyshova, Maryna; Czarski, Tomasz
2016-09-01
We present a study of a software-hardware environment for developing fast, high-throughput, low-latency computation methods that can be used as the back-end in High Energy Physics (HEP) and other High Performance Computing (HPC) systems handling large volumes of input from electronic sensor-based front-ends. The paper discusses parallelization possibilities and reports tests on Intel HPC solutions, with consideration of applications to Gas Electron Multiplier (GEM) measurement systems.
PVT: An Efficient Computational Procedure to Speed up Next-generation Sequence Analysis
2014-01-01
Background High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large volumes of data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat), where we take up a modular approach by breaking TopHat’s serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. Results We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single-end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during ‘spliced alignment’ and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. Conclusions PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired-end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover we propose PVT-Cloud, which implements the PVT pipeline in a cloud computing system. PMID:24894600
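The stage-overlapping idea behind such a pipeline can be sketched with worker processes connected by queues, so one sample's downstream stage runs while the next sample's alignment stage starts. The stage bodies below are placeholders standing in for the aligner's steps, not PVT's actual code.

```python
from multiprocessing import Process, Queue

# Two pipeline stages connected by queues; a None sentinel marks end of input.
def stage_align(inq, outq):
    for sample in iter(inq.get, None):
        outq.put(f"{sample}.aligned")        # stand-in for spliced alignment
    outq.put(None)

def stage_report(inq):
    for item in iter(inq.get, None):
        print("finished:", item)             # stand-in for downstream steps

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    workers = [Process(target=stage_align, args=(q1, q2)),
               Process(target=stage_report, args=(q2,))]
    for w in workers:
        w.start()
    for sample in ["SRX026839", "SRX026838", "SRR1027730"]:
        q1.put(sample)
    q1.put(None)                              # sentinel: no more input
    for w in workers:
        w.join()
```

Because the stages run as separate processes, the second stage begins consuming results as soon as the first sample is aligned, which is the source of the resource-utilization gain described above.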
PVT: an efficient computational procedure to speed up next-generation sequence analysis.
Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur
2014-06-04
High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large volumes of data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat), where we take up a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single-end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during 'spliced alignment' and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. PVT provides an improvement over TopHat for spliced alignment of NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired-end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover we propose PVT-Cloud, which implements the PVT pipeline in a cloud computing system.
Promoting High-Performance Computing and Communications. A CBO Study.
ERIC Educational Resources Information Center
Webre, Philip
In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…
Investigation of Vocational High-School Students' Computer Anxiety
ERIC Educational Resources Information Center
Tuncer, Murat; Dogan, Yunus; Tanas, Ramazan
2013-01-01
With the advent of computer technologies, we increasingly encounter these technologies in every field of life. The fact that computer technology is so thoroughly interwoven with daily life makes it necessary to investigate certain psychological attitudes of those working with computers towards computers. As this study is limited to…
Embedded Web Technology: Applying World Wide Web Standards to Embedded Systems
NASA Technical Reports Server (NTRS)
Ponyik, Joseph G.; York, David W.
2002-01-01
Embedded Systems have traditionally been developed in a highly customized manner. The user interface hardware and software along with the interface to the embedded system are typically unique to the system for which they are built, resulting in extra cost to the system in terms of development time and maintenance effort. World Wide Web standards have been developed in the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms but the World Wide Web standards allow them to interface without knowing about the details of the system at the other end of the interface. Embedded Web Technology is the merging of Embedded Systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an Embedded System's internal network.
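A minimal sketch of the idea, using only Python's standard library: the embedded system runs a tiny HTTP server so that any standard browser can act as its user interface. This illustrates the pattern only, not the NASA Embedded Web Technology implementation, and the telemetry values are invented.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Placeholder telemetry; a real embedded system would read its own sensors.
TELEMETRY = {"temperature_C": 21.4, "fan_rpm": 1200, "mode": "idle"}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the current system state to any browser or HTTP client.
        body = json.dumps(TELEMETRY).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```

Pointing a browser at port 8080 of the device then replaces a custom display panel and its bespoke driver software.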
Milestones on the road to independence for the blind
NASA Astrophysics Data System (ADS)
Reed, Kenneth
1997-02-01
Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the field of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. An example is the providing of portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into midi files to drive a synthesizer? Can computer vision help the blind cross a road including predictions of where oncoming traffic will be located? Can computer vision technology provide spoken description of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks 'can computer vision help me flip a pancake?' His challenge to those in the computer vision field is 'where can we go from here?'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badgett, W.
The CDF Collider Detector at Fermilab ceased data collection on September 30, 2011 after over twenty-five years of operation. We review the performance of the CDF Run II data acquisition systems over the last ten of these years while recording nearly 10 inverse femtobarns of proton-antiproton collisions with a high degree of efficiency - exceeding 83%. Technology choices in the online control and configuration systems and front-end embedded processing have impacted the efficiency and quality of the data accumulated by CDF, and have had to perform over a large range of instantaneous luminosity values and trigger rates. We identify significant sources of problems and successes. In particular, we present our experience computing and acquiring data in a radiation environment, and attempt to correlate system technical faults with radiation dose rate and technology choices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2007-01-09
The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project result in significantly enhancing the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.
Assessment of brain-machine interfaces from the perspective of people with paralysis.
Blabe, Christine H; Gilja, Vikash; Chestek, Cindy A; Shenoy, Krishna V; Anderson, Kim D; Henderson, Jaimie M
2015-08-01
One of the main goals of brain-machine interface (BMI) research is to restore function to people with paralysis. Currently, multiple BMI design features are being investigated, based on various input modalities (externally applied and surgically implantable sensors) and output modalities (e.g. control of computer systems, prosthetic arms, and functional electrical stimulation systems). While these technologies may eventually provide some level of benefit, they each carry associated burdens for end-users. We sought to assess the attitudes of people with paralysis toward using various technologies to achieve particular benefits, given the burdens currently associated with the use of each system. We designed and distributed a technology survey to determine the level of benefit necessary for people with tetraplegia due to spinal cord injury to consider using different technologies, given the burdens currently associated with them. The survey queried user preferences for 8 BMI technologies including electroencephalography, electrocorticography, and intracortical microelectrode arrays, as well as a commercially available eye tracking system for comparison. Participants used a 5-point scale to rate their likelihood to adopt these technologies for 13 potential control capabilities. Survey respondents were most likely to adopt BMI technology to restore some of their natural upper extremity function, including restoration of hand grasp and/or some degree of natural arm movement. High speed typing and control of a fast robot arm were also of interest to this population. Surgically implanted wireless technologies were twice as 'likely' to be adopted as their wired equivalents. Assessing end-user preferences is an essential prerequisite to the design and implementation of any assistive technology. The results of this survey suggest that people with tetraplegia would adopt an unobtrusive, autonomous BMI system for both restoration of upper extremity function and control of external devices such as communication interfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Xiaoqing; Deng, Z. T.
2009-11-10
This is the final report for the Department of Energy (DOE) project DE-FG02-06ER25746, entitled, "Continuing High Performance Computing Research and Education at AAMU". This three-year project started on August 15, 2006, and ended on August 14, 2009. The objective of this project was to enhance high performance computing research and education capabilities at Alabama A&M University (AAMU), and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. AAMU has successfully completed all the proposed research and educational tasks. Through the support of DOE, AAMU was able to provide opportunities to minority students through summer interns and the DOE computational science scholarship program. In the past three years, AAMU (1) supported three graduate research assistants in image processing for the hypersonic shockwave control experiment and in computational science related areas; (2) recruited and provided full financial support for six AAMU undergraduate summer research interns to participate in the Research Alliance in Math and Science (RAMS) program at Oak Ridge National Lab (ORNL); (3) awarded 30 highly competitive DOE High Performance Computing Scholarships ($1500 each) to qualified top AAMU undergraduate students in science and engineering majors; (4) improved the high performance computing laboratory at AAMU with the addition of three high performance Linux workstations; and (5) conducted image analysis for the electromagnetic shockwave control experiment and computation of shockwave interactions to verify the design and operation of the AAMU supersonic wind tunnel. The high performance computing research and education activities at AAMU created great impact for minority students. As praised by the Accreditation Board for Engineering and Technology (ABET) in 2009: "The work on high performance computing that is funded by the Department of Energy provides scholarships to undergraduate students as computational science scholars. This is a wonderful opportunity to recruit under-represented students." Three ASEE papers were published in the 2007, 2008 and 2009 proceedings of the ASEE Annual Conference, respectively. Presentations of these papers were also made at the ASEE Annual Conferences. It is very critical to continue the research and education activities.
Using speech recognition to enhance the Tongue Drive System functionality in computer access.
Huo, Xueliang; Ghovanloo, Maysam
2011-01-01
Tongue Drive System (TDS) is a wireless tongue operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with a commercially available speech recognition software, the Dragon Naturally Speaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing.
DKIST Adaptive Optics System: Simulation Results
NASA Astrophysics Data System (ADS)
Marino, Jose; Schmidt, Dirk
2016-05-01
The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results for the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly include important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.
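The WFS measurement step mentioned above comes down to estimating spot positions in each sub-aperture image; the toy sketch below computes an intensity-weighted centroid on a synthetic frame. It is only an illustration of that single step, not the DKIST simulation tool.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) of a sub-aperture image."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic sub-aperture frame: background noise plus one bright spot.
rng = np.random.default_rng(0)
frame = rng.random((16, 16)) * 0.1
frame[10, 5] = 5.0

cx, cy = centroid(frame)
ref = ((frame.shape[1] - 1) / 2, (frame.shape[0] - 1) / 2)
slope_proxy = (cx - ref[0], cy - ref[1])   # offset from reference ~ local wavefront slope
print("centroid:", (round(cx, 2), round(cy, 2)), "slope proxy:", slope_proxy)
```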
National research and education network
NASA Technical Reports Server (NTRS)
Villasenor, Tony
1991-01-01
Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies, both to speed the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R and D effort; Reduce the uncertainties to industry for R and D and use of this technology through increased cooperation between government, industry, and universities and by the continued use of government and government funded facilities as a prototype user for early commercial HPCC products; and Support underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.
Storm, J.B.
2004-01-01
The U.S. Geological Survey is computing continuous discharge of the Pearl River at the upper end of the Ross Barnett Reservoir near Jackson, Mississippi, using acoustic technology and conventional streamgaging methods. The computed inflow is posted "real-time" to the Mississippi District's web page where it can be monitored by the Pearl River Valley Water Supply District (PRVWSD) to aid in reservoir regulation. The use of this technology to determine discharge allows the PRVWSD to prepare for headwater flooding conditions ahead of time and adjust reservoir outflow accordingly. Hydraulic and acoustic conditions inherent to this site have presented problems not normally encountered at a typical streamgaging site. Copyright ASCE 2004.
Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data
NASA Astrophysics Data System (ADS)
Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.
2016-12-01
Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. And while the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to the SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from diverse multiple sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.
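The co-location needed for the "in place" conditional analyses described above depends on a shared spatiotemporal index. The sketch below uses a flat latitude/longitude/time binning purely as a stand-in for the hierarchical triangular mesh; record contents and bin sizes are invented for the illustration.

```python
from collections import defaultdict

def bin_key(lat, lon, t, dlat=1.0, dlon=1.0, dt=3600):
    """Assign a record to a coarse spatiotemporal bin (a stand-in for HTM cells)."""
    return (int(lat // dlat), int(lon // dlon), int(t // dt))

def build_index(records):
    index = defaultdict(list)
    for rec in records:
        index[bin_key(rec["lat"], rec["lon"], rec["time"])].append(rec)
    return index

# Two "diverse sources": a satellite observation and a model output record.
satellite = [{"lat": 35.2, "lon": -80.9, "time": 7200, "radiance": 0.7}]
model = [{"lat": 35.7, "lon": -80.1, "time": 7500, "cloud_frac": 0.4}]

sat_idx, model_idx = build_index(satellite), build_index(model)
for key in sat_idx.keys() & model_idx.keys():      # co-located, conditional join
    print(key, sat_idx[key], model_idx[key])
```

In the distributed setting, the same keys determine which node holds which chunks, so records that share a bin are stored and processed together and the join needs no bulk data movement.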
The change in critical technologies for computational physics
NASA Technical Reports Server (NTRS)
Watson, Val
1990-01-01
It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer with high-speed communications, along with the development of especially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamics Simulation Facility. The next technology which this field requires is one that would eliminate visual clutter by extracting key features of simulations of physics and technology in order to create displays that clearly portray these key features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all work stations and PCs, is recommended.
NASA Astrophysics Data System (ADS)
Nkere, Nsidi
A qualitative case study was conducted by examining the perceptions of fifth-grade African American girls about their experiences with science, technology, engineering and mathematics (STEM) education and potential for STEM as a future career. As the United States suffers from waning participation across all demographics in STEM and a high level of underrepresentation of African American women in STEM, the proposed study examined data collected through open-ended interviews with fifth-grade African American girls to explore how their current experiences and perceptions might relate to the underrepresentation of African American women in the STEM fields. Participants were selected from Miracle Elementary School (pseudonym), and consisted of all five students in a small class of high-achieving fifth-grade girls. Data were collected through in-class observations and open-ended interviews, and were analyzed using computer content analysis. The most important key results threaded through the data were related to the importance and role of the teacher, the importance of math to students, the role of experimentation and discovery, and hands-on and personal experience. Future studies are encouraged to utilize longitudinal design to follow students from elementary to university level in an effort to develop and understand the perception, persistence, and experience of all girls in STEM programs.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.
2015-01-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April, 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign was in July, 2014, based at NASA Langley Research Center (LaRC) in Hampton, VA, on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) funded by NASA's Instrument Incubator Program (IIP). The second campaign was in August, 2014, based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA, on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm real-time on-board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.
Translational bioinformatics in the cloud: an affordable alternative
2010-01-01
With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073
Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications
2011-05-31
microplate to in-plane resonance. Computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.)...issue involved. Towards this end, a suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via Lorentz force...
High-Performance Computing and Visualization | Energy Systems Integration Facility | NREL
High-performance computing (HPC) and visualization at NREL propel technology innovation... NREL is home to Peregrine, the largest high-performance computing system...
Recent Trends in Spintronics-Based Nanomagnetic Logic
NASA Astrophysics Data System (ADS)
Das, Jayita; Alam, Syed M.; Bhanja, Sanjukta
2014-09-01
With growing concerns about standby power in sub-100-nm CMOS technologies, alternative computing techniques and memory technologies are being explored. Spin transfer torque magnetoresistive RAM (STT-MRAM) is one such nonvolatile memory relying on magnetic tunnel junctions (MTJs) to store information. It uses spin transfer torque to write information and magnetoresistance to read information. In 2012, Everspin Technologies, Inc. commercialized the first 64Mbit Spin Torque MRAM. On the computing end, nanomagnetic logic (NML) is a promising technique with zero leakage and high data retention. In 2000, Cowburn and Welland first demonstrated its potential for logic and information propagation through magnetostatic interaction in a chain of single domain circular nanomagnetic dots of Supermalloy (Ni80Fe14Mo5X1, where X is other metals). In 2006, Imre et al. demonstrated wires and majority gates, followed by a demonstration of coplanar cross-wire systems by Pulecio et al. in 2010. Since 2004, researchers have also investigated the potential of MTJs in logic. More recently, with dipolar coupling between MTJs demonstrated in 2012, logic-in-memory architectures with STT-MRAM have been investigated. The architecture borrows the computing concept from NML and the read and write style from MRAM. The architecture can switch its operation between logic and memory modes, with the clock as classifier. Further, through logic partitioning between the MTJ and CMOS planes, a significant performance boost has been observed in basic computing blocks within the architecture. In this work, we have explored the developments in NML, in MTJs, and the more recent developments in hybrid MTJ/CMOS logic-in-memory architecture and its unique logic partitioning capability.
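The computing primitive that NML builds on is the three-input majority gate; fixing one input to 0 or 1 yields AND or OR, which is how larger Boolean functions are composed. The sketch below illustrates only this logic-level behaviour and does not model any magnetization dynamics.

```python
def majority(a: int, b: int, c: int) -> int:
    """Three-input majority vote, the basic NML gate."""
    return 1 if (a + b + c) >= 2 else 0

def and_gate(a: int, b: int) -> int:
    return majority(a, b, 0)    # one input pinned to 0 -> AND

def or_gate(a: int, b: int) -> int:
    return majority(a, b, 1)    # one input pinned to 1 -> OR

# Truth tables for the derived gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b))
```

In the physical implementation the inputs and output are magnetization states of coupled nanomagnets or MTJs rather than bits in software, but the composition rule is the same.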
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Hayden, Jeffrey L.
2005-01-01
For human and robotic exploration missions in the Vision for Exploration, roadmaps are needed for capability development and investments based on advanced technology developments. A roadmap development process was undertaken for the needed communications and networking capabilities and technologies for future human and robotic missions. The underlying processes are derived from work carried out during development of the future space communications architecture, and NASA's Space Architect Office (SAO) defined formats and structures for accumulating data. Interrelationships were established among emerging requirements, the capability analysis and technology status, and performance data. After developing an architectural communications and networking framework structured around the assumed needs for human and robotic exploration, in the vicinity of Earth, the Moon, along the path to Mars, and in the vicinity of Mars, information was gathered from expert participants. This information was used to identify the capabilities expected from the new infrastructure and the technological gaps in the way of obtaining them. We define realistic, long-term space communication architectures based on emerging needs and translate the needs into interfaces, functions, and computer processing that will be required. In developing our roadmapping process, we defined requirements for achieving end-to-end activities that will be carried out by future NASA human and robotic missions. This paper describes: 1) the architectural framework developed for analysis; 2) our approach to gathering and analyzing data from NASA, industry, and academia; 3) an outline of the technology research to be done, including milestones for technology research and demonstrations with timelines; and 4) the technology roadmaps themselves.
Drajsajtl, Tomáš; Struk, Petr; Bednárová, Alice
2013-01-01
AsTeRICS - "The Assistive Technology Rapid Integration & Construction Set" is a construction set for assistive technologies which can be adapted to the motor abilities of end-users. AsTeRICS allows access to different devices such as PCs, cell phones and smart home devices, with all of them integrated in a platform adapted as much as possible to each user. People with motor disabilities in the upper limbs, with no cognitive impairment, no perceptual limitations (neither visual nor auditory) and with basic skills in using technologies such as PCs, cell phones, electronic agendas, etc. have available a flexible and adaptable technology which enables them to access the Human-Machine-Interfaces (HMI) on the standard desktop and beyond. AsTeRICS provides graphical model design tools, a middleware and hardware support for the creation of tailored AT-solutions involving bioelectric signal acquisition, Brain-/Neural Computer Interfaces, Computer-Vision techniques and standardized actuator and device controls and allows combining several off-the-shelf AT-devices in every desired combination. Novel, end-user ready solutions can be created and adapted via a graphical editor without additional programming efforts. The AsTeRICS open-source framework provides resources for utilization and extension of the system to developers and researches. AsTeRICS was developed by the AsTeRICS project and was partially funded by EC.
Closed-loop dialog model of face-to-face communication with a photo-real virtual human
NASA Astrophysics Data System (ADS)
Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás
2004-01-01
We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to be able to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.
Final Project Report. Scalable fault tolerance runtime technology for petascale computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamoorthy, Sriram; Sadayappan, P
With the massive number of components comprising the forthcoming petascale computer systems, hardware failures will be routinely encountered during execution of large-scale applications. Due to the multidisciplinary, multiresolution, and multiscale nature of scientific problems that drive the demand for high end systems, applications place increasingly differing demands on the system resources: disk, network, memory, and CPU. In addition to MPI, future applications are expected to use advanced programming models such as those developed under the DARPA HPCS program as well as existing global address space programming models such as Global Arrays, UPC, and Co-Array Fortran. While there has been a considerable amount of work in fault tolerant MPI, with a number of strategies and extensions for fault tolerance proposed, virtually none of the advanced models proposed for emerging petascale systems is currently fault aware. To achieve fault tolerance, development of underlying runtime and OS technologies able to scale to the petascale level is needed. This project has evaluated a range of runtime techniques for fault tolerance for advanced programming models.
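One generic technique in this space is application-level checkpoint/restart, sketched below in Python as a pattern only; it is not the Harness runtime or any of the programming models named above, and the file name and computation are placeholders.

```python
import os
import pickle

CHECKPOINT = "state.ckpt"   # hypothetical checkpoint file

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0.0}

def save_state(state):
    """Write the checkpoint atomically so a crash never leaves a torn file."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
for step in range(state["step"], 1000):
    state["total"] += step * 1e-3         # stand-in for real computation
    state["step"] = step + 1
    if step % 100 == 0:
        save_state(state)
save_state(state)
print("done:", state["total"])
```

A fault-aware runtime automates this kind of state capture and recovery on behalf of the application, which is what makes it usable at petascale job sizes.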
NASA Technical Reports Server (NTRS)
Chen, Yongkang; Weislogel, Mark; Schaeffer, Ben; Semerjian, Ben; Yang, Lihong; Zimmerli, Gregory
2012-01-01
The mathematical theory of capillary surfaces has developed steadily over the centuries, but it was not until the last few decades that new technologies placed a more urgent demand on a substantially more qualitative and quantitative understanding of phenomena relating to capillarity in general. So far, the new theory development successfully predicts the behavior of capillary surfaces for special cases. However, an efficient quantitative mathematical prediction of capillary phenomena related to the shape and stability of geometrically complex equilibrium capillary surfaces remains a significant challenge. As one of many numerical tools, the open-source Surface Evolver (SE) algorithm has played an important role over the last two decades. The current effort was undertaken to provide a front-end to enhance the accessibility of SE for the purposes of design and analysis. Like SE, the new code is open-source and will remain under development for the foreseeable future. The ultimate goal of the current Surface Evolver Fluid Interface Tool (SE-FIT) development is to build a fully integrated front-end with a set of graphical user interface (GUI) elements. Such a front-end enables access to functionalities that are developed along with the GUIs to deal with pre-processing, convergence computation operation, and post-processing. In other words, SE-FIT is not just a GUI front-end, but an integrated environment that can perform sophisticated computational tasks, e.g., importing industry-standard file formats and employing parameter sweep functions, which are both lacking in SE, and require minimal interaction by the user. These functions are created using a mixture of Visual Basic and the SE script language. They form the foundation for a high-performance front-end that substantially simplifies use without sacrificing the proven capabilities of SE. The real power of SE-FIT lies in its automated pre-processing, pre-defined geometries, convergence computation operation, computational diagnostic tools, and crash-handling capabilities to sustain extensive computations. SE-FIT performance is enabled by its so-called file-layer mechanism. During the early stages of SE-FIT development, it became necessary to modify the original SE code to enable the capabilities required for enhanced and synchronized communication. To this end, a file layer was created that serves as a command buffer to ensure continuous and sequential execution of commands sent from the front-end to SE. It also establishes a proper means for handling crashes. The file layer logs input commands and SE output; it also supports user interruption requests, back and forward operation (i.e., undo and redo), and others. It especially enables the batch-mode computation of a series of equilibrium surfaces and the searching of critical parameter values in studying the stability of capillary surfaces. In this way, the modified SE significantly extends the capabilities of the original SE.
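To make the file-layer idea concrete, here is a minimal, hypothetical Python sketch of a command buffer that logs every command, hands commands to a solver strictly in order, and keeps history for undo. It is not the actual SE-FIT implementation (which mixes Visual Basic with the SE script language); the solver stub and the command strings are assumptions.

```python
# Minimal sketch of a file-backed command buffer between a GUI front-end
# and a Surface Evolver-like solver. Illustrative only.
import pathlib

def fake_surface_evolver(command):
    # stand-in for a Surface Evolver process: just acknowledge the command
    return f"SE executed: {command}"

class FileLayer:
    def __init__(self, log_path="se_fit_commands.log"):
        self.log = pathlib.Path(log_path)
        self.history, self.redo_stack = [], []

    def send(self, command):
        # persist command and solver output, then record it for undo/redo
        out = fake_surface_evolver(command)
        with self.log.open("a") as f:
            f.write(f"> {command}\n{out}\n")
        self.history.append(command)
        self.redo_stack.clear()
        return out

    def undo(self):
        if self.history:
            self.redo_stack.append(self.history.pop())

layer = FileLayer()
for cmd in ["r", "g 100", "V"]:     # typical SE refine/iterate commands (assumed)
    print(layer.send(cmd))
layer.undo()
print(layer.history)                # -> ['r', 'g 100']
```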
Semantic Repositories for eGovernment Initiatives: Integrating Knowledge and Services
NASA Astrophysics Data System (ADS)
Palmonari, Matteo; Viscusi, Gianluigi
In recent years, public sector investments in eGovernment initiatives have depended on making existing governmental ICT systems and infrastructures more reliable. Furthermore, we are witnessing a change in the focus of public sector management, from the disaggregation, competition and performance measurements typical of the New Public Management (NPM), to new models of governance aiming for the reintegration of services under a new perspective in bureaucracy, namely a holistic approach to policy making which exploits the extensive digitalization of administrative operations. In this scenario, major challenges are related to supporting effective access to information both at the front-end level, by means of highly modular and customizable content provision, and at the back-end level, by means of information integration initiatives. Repositories of information about data and services that exploit semantic models and technologies can support these goals by bridging the gap between the data-level representations and the human-level knowledge involved in accessing information and in searching for services. Moreover, semantic repository technologies can reach a new level of automation for different tasks involved in interoperability programs, related both to data integration techniques and to service-oriented computing approaches. In this chapter, we discuss the above topics by referring to techniques and experiences where repositories based on conceptual models and ontologies are used at different levels in eGovernment initiatives: at the back-end level to produce a comprehensive view of the information managed in the public administrations' (PA) information systems, and at the front-end level to support effective service delivery.
Waghmare, Lalitbhushan S; Jagzape, Arunita T; Rawekar, Alka T; Quazi, Nazli Z; Mishra, Ved Prakash
2014-01-01
Background: Higher education has undergone profound transformation due to recent technological advancements. As a result, health profession students have a strong base for utilizing information technology for their professional development. Studies over the recent past reflect a striking change in the pattern of technology usage amongst medical students, with prospects expanding exponentially through e-books, science apps, ready-made PowerPoint presentations, evidence-based medicine, Wikipedia, etc. Aim & Objectives: The study was undertaken with the aim of exploring the general perceptions of medical students and faculty about the role of Information and Communication Technology in higher education and gauging students' dependence on it for seeking knowledge and information. Study Design: Cross-sectional, mixed research design. Materials and Methods: The study was conducted in the Department of Physiology, Datta Meghe Institute of Medical Sciences (Deemed University). The study population included students (n=150) and teaching faculty (n=10) of the first phase of the medical curriculum. A survey questionnaire (10 closed-ended and 5 open-ended items) and a focus group discussion (FGD) captured the perceptions and attitudes of students and faculty, respectively, regarding the role and relevance of technology in higher education. Observations and Results: Quantitative analysis of closed-ended responses was done by percentage distribution, and qualitative analysis of open-ended responses and FGD excerpts was done by coding and observing trends and patterns. Overall, the observations were in favour of increasing usability of and dependence on technology as a ready reference tool for subject information. Learners valued textbooks and technology almost equally and regarded computer training as a desirable addition to the medical curriculum. Conclusion: The role of technology in education should be anticipated, and appropriate measures should be undertaken for its adequate and optimum utilization through proper training of students as well as facilitators. PMID:25121049
Srivastava, Tripti K; Waghmare, Lalitbhushan S; Jagzape, Arunita T; Rawekar, Alka T; Quazi, Nazli Z; Mishra, Ved Prakash
2014-06-01
Higher education has undergone profound transformation due to recent technological advancements. As a result, health profession students have a strong base for utilizing information technology for their professional development. Studies over the recent past reflect a striking change in the pattern of technology usage amongst medical students, with prospects expanding exponentially through e-books, science apps, ready-made PowerPoint presentations, evidence-based medicine, Wikipedia, etc. The study was undertaken with the aim of exploring the general perceptions of medical students and faculty about the role of Information and Communication Technology in higher education and gauging students' dependence on it for seeking knowledge and information. Cross-sectional, mixed research design. The study was conducted in the Department of Physiology, Datta Meghe Institute of Medical Sciences (Deemed University). The study population included students (n=150) and teaching faculty (n=10) of the first phase of the medical curriculum. A survey questionnaire (10 closed-ended and 5 open-ended items) and a focus group discussion (FGD) captured the perceptions and attitudes of students and faculty, respectively, regarding the role and relevance of technology in higher education. Quantitative analysis of closed-ended responses was done by percentage distribution, and qualitative analysis of open-ended responses and FGD excerpts was done by coding and observing trends and patterns. Overall, the observations were in favour of increasing usability of and dependence on technology as a ready reference tool for subject information. Learners valued textbooks and technology almost equally and regarded computer training as a desirable addition to the medical curriculum. The role of technology in education should be anticipated, and appropriate measures should be undertaken for its adequate and optimum utilization through proper training of students as well as facilitators.
Proceedings from the conference on high speed computing: High speed computing and national security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirons, K.P.; Vigil, M.; Carlson, R.
1997-07-01
This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.
Assessing Creative Problem-Solving with Automated Text Grading
ERIC Educational Resources Information Center
Wang, Hao-Chuan; Chang, Chun-Yen; Li, Tsai-Yen
2008-01-01
The work aims to improve the assessment of creative problem-solving in science education by employing language technologies and computational-statistical machine learning methods to grade students' natural language responses automatically. To evaluate constructs like creative problem-solving with validity, open-ended questions that elicit…
Security Aspects of Computer Supported Collaborative Work
1993-09-01
unstructured tasks at one end and prescriptive tasks at the other. Unstructured tasks are those requiring creative input from a number of users and...collaborative technology begun to mature, it has begun to outstrip prevailing management attitudes. One barrier to telecommuting is the perception that
High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center
NASA Astrophysics Data System (ADS)
Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.
2015-12-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable, user-friendly computational elements, its FPGA IP core can be modified over a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
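As a rough illustration of the frequency-domain processing mentioned above, the following Python sketch runs a 4,096-point FFT over a synthetic lidar return and picks the Doppler peak. All numerical values (sample rate, Doppler shift, noise level) are invented; this is not the HOPS FPGA implementation.

```python
# Hedged sketch: 4,096-point FFT Doppler estimate on a synthetic return.
import numpy as np

fs = 1.0e6                       # assumed sample rate (Hz), illustrative only
n = 4096                         # FFT length cited for the 3-D Winds processing
t = np.arange(n) / fs

f_doppler = 25.4e3               # assumed Doppler shift of the return (Hz)
x = np.cos(2 * np.pi * f_doppler * t) + 0.5 * np.random.randn(n)

# windowed power spectrum and peak pick
spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
f_peak = np.fft.rfftfreq(n, d=1 / fs)[np.argmax(spec)]
print(f"estimated Doppler shift: {f_peak:.1f} Hz")
```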
High-Speed On-Board Data Processing for Science Instruments: HOPS
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey
2015-01-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable, user-friendly computational elements, its FPGA IP core can be modified over a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
The future of computing--new architectures and new technologies.
Warren, P
2004-02-01
All modern computers are designed using the 'von Neumann' architecture and built using silicon transistor technology. Both architecture and technology have been remarkably successful. Yet there are a range of problems for which this conventional architecture is not particularly well adapted, and new architectures are being proposed to solve these problems, in particular based on insight from nature. Transistor technology has enjoyed 50 years of continuing progress. However, the laws of physics dictate that within a relatively short time period this progress will come to an end. New technologies, based on molecular and biological sciences as well as quantum physics, are vying to replace silicon, or at least coexist with it and extend its capability. The paper describes these novel architectures and technologies, places them in the context of the kinds of problems they might help to solve, and predicts their possible manner and time of adoption. Finally it describes some key questions and research problems associated with their use.
Data preservation at the Fermilab Tevatron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amerio, S.; Behari, S.; Boyd, J.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology, and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
A Primer on Infectious Disease Bacterial Genomics
Petkau, Aaron; Knox, Natalie; Graham, Morag; Van Domselaar, Gary
2016-01-01
The number of large-scale genomics projects is increasing due to the availability of affordable high-throughput sequencing (HTS) technologies. The use of HTS for bacterial infectious disease research is attractive because one whole-genome sequencing (WGS) run can replace multiple assays for bacterial typing, molecular epidemiology investigations, and more in-depth pathogenomic studies. The computational resources and bioinformatics expertise required to accommodate and analyze the large amounts of data pose new challenges for researchers embarking on genomics projects for the first time. Here, we present a comprehensive overview of a bacterial genomics project from beginning to end, with a particular focus on the planning and computational requirements for HTS data, and provide a general understanding of the analytical concepts needed to develop a workflow that will meet the objectives and goals of HTS projects. PMID:28590251
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Amerio, S.; Behari, S.; Boyd, J.; Brochmann, M.; Culbertson, R.; Diesburg, M.; Freeman, J.; Garren, L.; Greenlee, H.; Herner, K.; Illingworth, R.; Jayatilaka, B.; Jonckheere, A.; Li, Q.; Naymola, S.; Oleynik, G.; Sakumoto, W.; Varnes, E.; Vellidis, C.; Watts, G.; White, S.
2017-04-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
ERIC Educational Resources Information Center
Villano, Matt
2008-01-01
In the early days of computer technology, few, if any, school districts had chief information officers (CIOs). Information Technology (IT) was handled by computer or technology coordinators, many of whom were classroom teachers with passing interests in computers and associated high-tech gadgets and gizmos. As districts began embracing CIOs, the…
A New Look at NASA: Strategic Research In Information Technology
NASA Technical Reports Server (NTRS)
Alfano, David; Tu, Eugene (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, there is growth in geographically distributed data centers that store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation and communication. The key issues to be dealt with are time efficiency, cost minimization and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques like Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving quality of service and reducing computation and communication costs; and SAGE achieves performance improvement in processing geo-distributed data sets.
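For readers new to the Map-Reduce technique cited above, a minimal Python sketch of the pattern is shown below, with dictionaries standing in for geo-distributed sites; the site names and records are invented.

```python
# Hedged sketch of the map-reduce pattern across "sites" (toy data).
from collections import Counter
from functools import reduce

site_logs = {
    "dc-eu": ["GET /a", "GET /b", "GET /a"],
    "dc-us": ["GET /a", "GET /c"],
}

def map_phase(records):
    # local aggregation at each site avoids shipping raw records
    return Counter(records)

def reduce_phase(partials):
    # only compact partial counts cross the wide-area links
    return reduce(lambda a, b: a + b, partials, Counter())

partials = [map_phase(recs) for recs in site_logs.values()]
print(reduce_phase(partials))   # -> Counter({'GET /a': 3, 'GET /b': 1, 'GET /c': 1})
```

The design point the optimization methods above share is visible even in this toy: the map phase keeps computation close to the data, and only the reduced partials travel between data centers.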
NASA Astrophysics Data System (ADS)
Gianotti, F.; Tacchini, A.; Leto, G.; Martinetti, E.; Bruno, P.; Bellassai, G.; Conforti, V.; Gallozzi, S.; Mastropietro, M.; Tanci, C.; Malaguti, G.; Trifoglio, M.
2016-08-01
The Cherenkov Telescope Array (CTA) represents the next generation of ground-based observatories for very high energy gamma-ray astronomy. The CTA will consist of two arrays at two different sites, one in the northern and one in the southern hemisphere. The current CTA design foresees, at the southern site, the installation of many tens of imaging atmospheric Cherenkov telescopes of three different classes, namely large, medium and small, so defined in relation to their mirror area; the northern hemisphere array would consist of a few tens of the two larger telescope types. The Italian National Institute for Astrophysics (INAF) is developing the Cherenkov Small Size Telescope ASTRI SST-2M end-to-end prototype within the framework of the international CTA project. The ASTRI prototype has been installed at the INAF observing station located at Serra La Nave on Mt. Etna, Italy. Furthermore, a mini-array composed of nine ASTRI telescopes has been proposed for installation at the southern CTA site. Among the several different infrastructures belonging to the ASTRI project, the Information and Communication Technology (ICT) equipment is dedicated to computing and data storage operations, as well as the control of the entire telescope, and is designed to achieve maximum efficiency for all performance requirements. Thus a complete and stand-alone computer centre has been designed and implemented. The goal is to obtain optimal ICT equipment, with an adequate level of redundancy, that can be scaled up for the ASTRI mini-array, taking into account the necessary control, monitoring and alarm system requirements. In this contribution we present the ICT equipment currently installed at the Serra La Nave observing station where the ASTRI SST-2M prototype will be operated. The computer centre and the control room are described, with particular emphasis on the Local Area Network scheme, the computing and data storage system, and the telescope control and monitoring.
ERIC Educational Resources Information Center
Newby, Gregory B.
Information technologies such as computer mediated communication (CMC), virtual reality, and telepresence can provide the communication flow required by high-speed management techniques that high-technology industries have adopted in response to changes in the climate of competition. Intra-corporate CMC might be used for a variety of purposes…
ERIC Educational Resources Information Center
Groff, Warren H.
As our society evolves from an industrial society to a computer literate, high technology, information society, educational planners must reexamine the role of postsecondary education in economic development and in intellectual capital formation. In response to this need, a task force on high technology was established to examine the following…
Missile signal processing common computer architecture for rapid technology upgrade
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul
2004-10-01
Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of the processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput, a newly introduced vector computing capability in general-purpose processors, and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. The use of standardized development tools and third-party software upgrades is enabled, as well as rapid upgrades of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and can migrate between weapon system variants, enabled by the simplicity of modification. This paper presents a reference design using the new approach that utilizes an Altivec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
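The front-end chain described above (non-uniformity correction, frame integration, detection) can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's PVL/VxWorks implementation; the array sizes, gain/offset maps, target location, and threshold are assumptions.

```python
# Hedged sketch of a toy IR front-end: NUC -> frame integration -> detection.
import numpy as np

rng = np.random.default_rng(0)
gain = rng.normal(1.0, 0.02, (64, 64))      # per-pixel gain map (assumed)
offset = rng.normal(0.0, 0.5, (64, 64))     # per-pixel offset map (assumed)

def nuc(raw):
    # two-point non-uniformity correction using the calibrated gain/offset
    return (raw - offset) / gain

def integrate(frames):
    # frame integration: average co-registered frames to raise SNR
    return np.mean(frames, axis=0)

def detect(img, k=5.0):
    # simple threshold detector: pixels k sigma above the scene mean
    return np.argwhere(img > img.mean() + k * img.std())

# synthetic IR frames with a weak point target at pixel (32, 40)
frames = []
for _ in range(8):
    scene = rng.normal(10.0, 1.0, (64, 64))
    scene[32, 40] += 4.0
    frames.append(nuc(scene * gain + offset))   # sensor applies gain/offset; NUC removes it

print(detect(integrate(frames)))                # -> [[32 40]]
```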
A high-speed DAQ framework for future high-level trigger and event building clusters
NASA Astrophysics Data System (ADS)
Caselle, M.; Ardila Perez, L. E.; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.
2017-03-01
Modern data acquisition and trigger systems require a throughput of several GB/s and latencies on the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics are delivered directly into the internal memory of GPUs through dedicated peer-to-peer PCIe communication. High-performance DMA engines have been developed for direct communication between FPGAs and GPUs using "DirectGMA" (AMD) and "GPUDirect" (NVIDIA) technologies. The proposed infrastructure is a candidate for future generations of event building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture is presented and its performance discussed.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
This hearing explores how the High Performance Computing and Communications Program (HPCC) relates to the technology needs of industry. Testimony and prepared statements from the following witnesses on future effects of computing and networking technologies on their companies are included: (1) F. Brett Berlin, president, Brett Berlin Associates,…
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present research methods and results for processing "digital holograms" for Internet transmission.
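A hedged numerical sketch of the four-step phase-shift interferometry mentioned above follows: four synthetic interferograms are combined to recover the wrapped object phase, which is the raw ingredient of a "digital hologram." The object field and beam amplitudes are invented for illustration.

```python
# Hedged sketch of four-step phase-shifting interferometry on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
phase = rng.uniform(0.0, 2.0 * np.pi, (128, 128))   # unknown object phase (synthetic)
a, r = 1.0, 1.0                                     # object / reference amplitudes (assumed)

def interferogram(shift):
    # recorded intensity with the reference beam stepped by `shift`
    return r**2 + a**2 + 2.0 * r * a * np.cos(phase - shift)

I0, I1, I2, I3 = (interferogram(s) for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2))

# standard four-step combination: I1 - I3 ~ sin(phase), I0 - I2 ~ cos(phase)
recovered = np.arctan2(I1 - I3, I0 - I2)            # wrapped object phase
print(np.allclose(np.cos(recovered), np.cos(phase)),
      np.allclose(np.sin(recovered), np.sin(phase)))  # -> True True
```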
Kübler, Andrea; Holz, Elisa M; Riccio, Angela; Zickler, Claudia; Kaufmann, Tobias; Kleih, Sonja C; Staiger-Sälzer, Pit; Desideri, Lorenzo; Hoogerwerf, Evert-Jan; Mattia, Donatella
2014-01-01
Albeit research on brain-computer interfaces (BCI) for controlling applications has expanded tremendously, we still face a translational gap when bringing BCI to end-users. To bridge this gap, we adapted the user-centered design (UCD) to BCI research and development, which implies a shift from focusing on single aspects, such as accuracy and information transfer rate (ITR), to a more holistic user experience. The UCD implements an iterative process between end-users and developers based on a valid evaluation procedure. Within the UCD framework, the usability of a device can be defined with regard to its effectiveness, efficiency, and satisfaction. We operationalized these aspects to evaluate BCI-controlled applications. Effectiveness was regarded as equivalent to the accuracy of selections, and efficiency to the amount of information transferred per time unit and the effort invested (workload). Satisfaction was assessed with questionnaires and visual-analogue scales. These metrics have been successfully applied to several BCI-controlled applications for communication and entertainment, which were evaluated by end-users with severe motor impairment. Results of four studies, involving a total of N = 19 end-users, revealed that effectiveness was moderate to high; efficiency in terms of ITR was low to high and workload low to medium; and, depending on the match between user and technology and the type of application, satisfaction was moderate to high. The evaluation metrics suggested here within the framework of the UCD proved to be an applicable and informative approach for evaluating BCI-controlled applications, and end-users with severe impairment and in the locked-in state were able to participate in this process.
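For concreteness, the efficiency metric (ITR) referred to above is commonly computed with the Wolpaw formula; the short Python sketch below evaluates it for illustrative numbers that are not taken from the four studies.

```python
# Hedged sketch: Wolpaw information transfer rate (bits/min) for an N-class BCI.
import math

def bits_per_selection(n_classes, p):
    # bits conveyed by one selection with accuracy p over n_classes options
    if p >= 1.0:
        return math.log2(n_classes)
    if p <= 0.0:
        return 0.0
    return (math.log2(n_classes) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def itr(n_classes, p, seconds_per_selection):
    return bits_per_selection(n_classes, p) * 60.0 / seconds_per_selection

# illustrative values only: 4 commands, 85% accuracy, 5 s per selection
print(round(itr(n_classes=4, p=0.85, seconds_per_selection=5.0), 2))  # ~13.83 bits/min
```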
Kübler, Andrea; Holz, Elisa M.; Riccio, Angela; Zickler, Claudia; Kaufmann, Tobias; Kleih, Sonja C.; Staiger-Sälzer, Pit; Desideri, Lorenzo; Hoogerwerf, Evert-Jan; Mattia, Donatella
2014-01-01
Albeit research on brain-computer interfaces (BCI) for controlling applications has expanded tremendously, we still face a translational gap when bringing BCI to end-users. To bridge this gap, we adapted the user-centered design (UCD) to BCI research and development, which implies a shift from focusing on single aspects, such as accuracy and information transfer rate (ITR), to a more holistic user experience. The UCD implements an iterative process between end-users and developers based on a valid evaluation procedure. Within the UCD framework, the usability of a device can be defined with regard to its effectiveness, efficiency, and satisfaction. We operationalized these aspects to evaluate BCI-controlled applications. Effectiveness was regarded as equivalent to the accuracy of selections, and efficiency to the amount of information transferred per time unit and the effort invested (workload). Satisfaction was assessed with questionnaires and visual-analogue scales. These metrics have been successfully applied to several BCI-controlled applications for communication and entertainment, which were evaluated by end-users with severe motor impairment. Results of four studies, involving a total of N = 19 end-users, revealed that effectiveness was moderate to high; efficiency in terms of ITR was low to high and workload low to medium; and, depending on the match between user and technology and the type of application, satisfaction was moderate to high. The evaluation metrics suggested here within the framework of the UCD proved to be an applicable and informative approach for evaluating BCI-controlled applications, and end-users with severe impairment and in the locked-in state were able to participate in this process. PMID:25469774
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
This hearing focused on H. R. 656, companion bill of S. 272, which calls for high performance computing legislation. This is one of several initiatives to provide for a coordinated federal research program to ensure continued U.S. leadership in high performance computing. The bill authorizes the development of a National Research and Education…
The Computer Industry. High Technology Industries: Profiles and Outlooks.
ERIC Educational Resources Information Center
International Trade Administration (DOC), Washington, DC.
A series of meetings was held to assess future problems in United States high technology, particularly in the fields of robotics, computers, semiconductors, and telecommunications. This report, which focuses on the computer industry, includes a profile of this industry and the papers presented by industry speakers during the meetings. The profile…
X-38 Experimental Control Laws
NASA Technical Reports Server (NTRS)
Munday, Steve; Estes, Jay; Bordano, Aldo J.
2000-01-01
X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
High-Performance Computing Systems and Operations | Computational Science | NREL
NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies.
Leeb, Robert; Perdikis, Serafeim; Tonin, Luca; Biasiucci, Andrea; Tavella, Michele; Creatura, Marco; Molina, Alberto; Al-Khodairy, Abdul; Carlson, Tom; Millán, José D R
2013-10-01
Brain-computer interfaces (BCIs) are no longer only used by healthy participants under controlled conditions in laboratory environments, but also by patients and end-users, controlling applications in their homes or clinics, without the BCI experts around. But are the technology and the field mature enough for this? Especially the successful operation of applications - like text entry systems or assistive mobility devices such as tele-presence robots - requires a good level of BCI control. How much training is needed to achieve such a level? Is it possible to train naïve end-users in 10 days to successfully control such applications? In this work, we report our experiences of training 24 motor-disabled participants at rehabilitation clinics or at the end-users' homes, without BCI experts present. We also share the lessons that we have learned through transferring BCI technologies from the lab to the user's home or clinics. The most important outcome is that 50% of the participants achieved good BCI performance and could successfully control the applications (tele-presence robot and text-entry system). In the case of the tele-presence robot the participants achieved an average performance ratio of 0.87 (max. 0.97) and for the text entry application a mean of 0.93 (max. 1.0). The lessons learned and the gathered user feedback range from pure BCI problems (technical and handling), to common communication issues among the different people involved, and issues encountered while controlling the applications. The points raised in this paper are very widely applicable and we anticipate that they might be faced similarly by other groups, if they move on to bringing the BCI technology to the end-user, to home environments and towards application prototype control. Copyright © 2013 Elsevier B.V. All rights reserved.
Seeing beyond Computer Science and Software Engineering
NASA Astrophysics Data System (ADS)
Nori, Kesav Vithal
The boundaries of computer science are defined by what symbolic computation can accomplish. Software Engineering is concerned with effective use of computing technology to support automatic computation on a large scale so as to construct desirable solutions to worthwhile problems. Both focus on what happens within the machine. In contrast, most practical applications of computing support end-users in realizing (often unsaid) objectives. It is often said that such objectives cannot be even specified, e.g., what is the specification of MS Word, or for that matter, any flavour of UNIX? This situation points to the need for architecting what people do with computers. Based on Systems Thinking and Cybernetics, we present such a viewpoint which hinges on Human Responsibility and means of living up to it.
Wideband monolithically integrated front-end subsystems and components
NASA Astrophysics Data System (ADS)
Mruk, Joseph Rene
This thesis presents the analysis, design, and measurements of passive, monolithically integrated, wideband recta-coax and printed circuit board front-end components. Monolithic fabrication of antennas, impedance transformers, filters, and transitions lowers manufacturing costs by reducing assembly time and enhances performance by removing connectors and cabling between the devices. Computational design, fabrication, and measurements are used to demonstrate the capabilities of these front-end assemblies. Two-arm wideband planar log-periodic antennas fed using a horizontal feed that allows for filters and impedance transformers to be readily fabricated within the radiating region of the antenna are demonstrated. At microwave frequencies, low-cost printed circuit board processes are typically used to produce planar devices. A 1.8 to 11 GHz two-arm planar log-periodic antenna is designed with a monolithically integrated impedance transformer. Band rejection methods based on modifying the antenna aperture, use of an integrated filter, and the application of both methods are investigated with realized gain suppressions of over 25 dB achieved. The ability of standard circuit board technology to fabricate millimeter-wave devices up to 110 GHz is severely limited. Thin dielectrics are required to prevent the excitation of higher order modes in the microstrip substrate. Fabricating the thin line widths required for the antenna aperture also becomes prohibitively challenging. Surface micro-machining typically used in the fabrication of MEMS devices is capable of producing the extremely small features that can be used to fabricate antennas extending through W-band. A directly RF fed 18 to 110 GHz planar log-periodic antenna is developed. The antenna is fabricated with an integrated impedance transformer and additional transitions for measurement characterization. Singly terminated low-loss wideband millimeter-wave filters operating over V- and W- band are developed. High quality performance of an 18 to 100 GHz front-end is realized by dividing the single instantaneous antenna into two apertures operating from 18 to 50 and 50 to 100 GHz. Each channel features an impedance transformer, low-pass (low-frequency) or band-pass (high-frequency) filter, and grounded CPW launch. This dual-aperture front-end demonstrates that micromachining technology is now capable of fabricating broadband millimeter-wave components with a high degree of integration.
DOT National Transportation Integrated Search
2005-01-01
The project involves the enhancement of the statewide crash data reporting with automated collection and data capture tools. To that end the project provided funding for computer hardware and peripherals to expand the use of the national model to mor...
75 FR 77934 - Small Business Information Security Task Force
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... on them. The Task Force has until the end of 2013 to complete the report but it is hoped that the... computing technology industry itself. Mr. Aaron Berstein then volunteered to contact Microsoft to inquire into the possibility of Microsoft providing an online collaborative space software tool for use...
Using Speech Recognition to Enhance the Tongue Drive System Functionality in Computer Access
Huo, Xueliang; Ghovanloo, Maysam
2013-01-01
Tongue Drive System (TDS) is a wireless, tongue-operated assistive technology (AT) that can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as substitutes for the mouse and keyboard, respectively, in computer access. To enhance TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with commercially available speech recognition software, Dragon NaturallySpeaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing. PMID:22255801
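A minimal, hypothetical sketch of the division of labor described above, with discrete TDS-style commands driving the pointer and dictated text handling entry, is given below; the command names, step sizes, and callback wiring are invented and do not reflect the actual TDS or Dragon APIs.

```python
# Hedged sketch: fusing discrete pointing commands with dictated text entry.
POINTER_ACTIONS = {
    "left": (-10, 0), "right": (10, 0), "up": (0, -10), "down": (0, 10),
}

class HybridInput:
    def __init__(self):
        self.cursor = [0, 0]     # toy screen coordinates
        self.buffer = []         # dictated text replaces keyboard entry

    def on_pointer_command(self, cmd):
        # discrete commands (tongue-driven in the real system) move the cursor
        if cmd in POINTER_ACTIONS:
            dx, dy = POINTER_ACTIONS[cmd]
            self.cursor[0] += dx
            self.cursor[1] += dy
        elif cmd == "select":
            print("click at", tuple(self.cursor))

    def on_speech(self, text):
        self.buffer.append(text)

hi = HybridInput()
for c in ["right", "right", "down", "select"]:
    hi.on_pointer_command(c)
hi.on_speech("hello world")
print(hi.cursor, hi.buffer)      # -> [20, 10] ['hello world']
```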
Takata, Munehisa; Watanabe, Go; Ohtake, Hiroshi; Ushijima, Teruaki; Yamaguchi, Shojiro; Kikuchi, Yujiro; Yamamoto, Yoshitaka
2011-05-01
This study applied a computer-controlled mechanical stapler to vascular end-to-end anastomosis to achieve an automatic aortic anastomosis between the aorta and an artificial graft. In this experimental study, we created a mechanical end-to-end anastomotic model and assessed the strength of the anastomotic site under high pressure. We used a computer-controlled circular stapler named iDrive (Power Medical Interventions, Covidien plc, Dublin, Ireland) for the anastomosis between the porcine aorta and an artificial graft. Then the mechanically stapled group (group A) and the manually sutured group (group B) were compared 10 times, and we assessed the differences at several levels of pressure. To use a mechanical stapler in vascular anastomosis, some special preparations of both the aorta and the artificial graft are necessary to narrow the open end before the procedures. To solve this problem, we established a specially designed purse-string suture for both and finally established end-to-end vascular anastomosis. The anastomosis speed of group A was statistically significantly faster than that of group B (P < .01). The group A anastomotic sites also showed significantly more tolerance to high pressure than those of group B. The computer-controlled stapling device enabled reliable anastomosis of the aorta and the artificial graft. This study showed that mechanical vascular anastomosis with the iDrive was sufficiently strong and safe relative to manual suturing. Copyright © 2011 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
2010-08-01
NASA Task Load Index (NASA-TLX): Participants were given the NASA-TLX subjective workload rating at the end of each task (appendix C; Hart and Staveland, 1987). The NASA-TLX is a multi-dimensional rating procedure that derives an overall workload score based on a weighted average of ratings on six subscales.
SPring-8 beamline control system.
Ohata, T; Konishi, H; Kimura, H; Furukawa, Y; Tamasaku, K; Nakatani, T; Tanabe, T; Matsumoto, N; Ishii, M; Ishikawa, T
1998-05-01
The SPring-8 beamline control system is now taking part in the control of the insertion device (ID), front end, beam transportation channel and all interlock systems of the beamline: it will supply a highly standardized environment of apparatus control for collaborative researchers. In particular, ID operation is very important in a third-generation synchrotron light source facility. It is also very important to consider the security system because the ID is part of the storage ring and is therefore governed by the synchrotron ring control system. The progress of computer networking systems and the technology of security control require the development of a highly flexible control system. An interlock system that is independent of the control system has increased the reliability. For the beamline control system the so-called standard model concept has been adopted. VME-bus (VME) is used as the front-end control system and a UNIX workstation as the operator console. CPU boards of the VME-bus are RISC processor-based board computers operated by a LynxOS-based HP-RT real-time operating system. The workstation and the VME are linked to each other by a network, and form the distributed system. The HP 9000/700 series with HP-UX and the HP 9000/743rt series with HP-RT are used. All the controllable apparatus may be operated from any workstation.
Feasibility of Homomorphic Encryption for Sharing I2B2 Aggregate-Level Data in the Cloud
Raisaro, Jean Louis; Klann, Jeffrey G; Wagholikar, Kavishwar B; Estiri, Hossein; Hubaux, Jean-Pierre; Murphy, Shawn N
2018-01-01
The biomedical community is lagging in the adoption of cloud computing for the management of medical data. The primary obstacles are concerns about privacy and security. In this paper, we explore the feasibility of using advanced privacy-enhancing technologies in order to enable the sharing of sensitive clinical data in a public cloud. Our goal is to facilitate sharing of clinical data in the cloud by minimizing the risk of unintended leakage of sensitive clinical information. In particular, we focus on homomorphic encryption, a specific type of encryption that offers the ability to run computation on the data while the data remains encrypted. This paper demonstrates that homomorphic encryption can be used efficiently to compute aggregating queries on the ciphertexts, along with providing end-to-end confidentiality of aggregate-level data from the i2b2 data model. PMID:29888067
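To illustrate the additive homomorphism that makes such aggregate queries possible, here is a toy Paillier-style example in Python. It uses tiny, insecure demonstration primes and is not the i2b2 implementation; the per-site counts and key parameters are invented.

```python
# Toy additively homomorphic (Paillier-style) aggregation sketch. NOT secure:
# demonstration primes only. Requires Python 3.8+ for pow(x, -1, n).
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=2357, q=2551):                 # tiny demo primes (assumed)
    n = p * q
    g = n + 1
    lam = lcm(p - 1, q - 1)
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)           # mu = (L(g^lam mod n^2))^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (((pow(c, lam, n * n) - 1) // n) * mu) % n

pub, priv = keygen()
counts = [3, 5, 9]                          # hypothetical per-site patient counts
cts = [encrypt(pub, c) for c in counts]
agg = 1
for ct in cts:                              # ciphertext product = plaintext sum
    agg = (agg * ct) % (pub[0] ** 2)
print(decrypt(pub, priv, agg))              # -> 17, computed without decrypting inputs
```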
Feasibility of Homomorphic Encryption for Sharing I2B2 Aggregate-Level Data in the Cloud.
Raisaro, Jean Louis; Klann, Jeffrey G; Wagholikar, Kavishwar B; Estiri, Hossein; Hubaux, Jean-Pierre; Murphy, Shawn N
2018-01-01
The biomedical community is lagging in the adoption of cloud computing for the management of medical data. The primary obstacles are concerns about privacy and security. In this paper, we explore the feasibility of using advanced privacy-enhancing technologies in order to enable the sharing of sensitive clinical data in a public cloud. Our goal is to facilitate sharing of clinical data in the cloud by minimizing the risk of unintended leakage of sensitive clinical information. In particular, we focus on homomorphic encryption, a specific type of encryption that offers the ability to run computation on the data while the data remains encrypted. This paper demonstrates that homomorphic encryption can be used efficiently to compute aggregating queries on the ciphertexts, along with providing end-to-end confidentiality of aggregate-level data from the i2b2 data model.
Using NCLab-karel to improve computational thinking skill of junior high school students
NASA Astrophysics Data System (ADS)
Kusnendar, J.; Prabawa, H. W.
2018-05-01
The increasing human interaction with technology and the increasingly complex development of the digital world make computer science education an interesting theme to study. Previous studies on computer literacy and competency reveal that Indonesian teachers in general have fairly high computational skills, but their use of these skills is limited to a few applications. This results in limited computer-related learning for students. On the other hand, computer science education is considered unrelated to real-world solutions. This paper addresses the utilization of NCLab-Karel in shaping computational thinking in students. Computational thinking is believed to help students learn about technology. The implementation of Karel shows that it is able to increase student interest in studying computational material, especially algorithms. Observations made during the learning process also indicate the growth and development of a computing mindset in students.
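As a flavor of the kind of stepwise reasoning Karel-style environments exercise, here is a toy Python robot with move/turn_left commands tracing a square; it is an invented illustration, not NCLab's actual API.

```python
# Hedged toy sketch of a Karel-style grid robot (illustrative only).
class Karel:
    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]    # N, E, S, W as (dx, dy)

    def __init__(self, x=0, y=0, facing=1):       # start at origin, facing East
        self.x, self.y, self.facing = x, y, facing
        self.trail = [(x, y)]

    def move(self):
        dx, dy = self.DIRS[self.facing]
        self.x, self.y = self.x + dx, self.y + dy
        self.trail.append((self.x, self.y))

    def turn_left(self):
        self.facing = (self.facing - 1) % 4

# trace a 2x2 square, the kind of stepwise plan such exercises ask students to form
k = Karel()
for _ in range(4):
    k.move()
    k.move()
    k.turn_left()
print(k.trail)   # -> closed square path ending back at (0, 0)
```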
High speed Infrared imaging method for observation of the fast varying temperature phenomena
NASA Astrophysics Data System (ADS)
Moghadam, Reza; Alavi, Kambiz; Yuan, Baohong
With new improvements in high-end commercial R&D camera technologies, many challenges in high-speed IR imaging have been overcome. The core benefits of this technology are the ability to capture fast-varying phenomena without image blur, to acquire enough data to properly characterize dynamic energy, and to increase the dynamic range without compromising the number of frames per second. This study presents a noninvasive method for determining the intensity field of a high-intensity focused ultrasound (HIFU) beam using infrared imaging. A high-speed infrared camera was placed above the tissue-mimicking material that was heated by HIFU, with no other sensors present in the HIFU axial beam. A MATLAB simulation code was used to perform a finite-element solution of the pressure-wave propagation and heat equations within the phantom, and the temperature rise in the phantom was computed. Three different power levels of HIFU transducers were tested, and the predicted temperature increase values were within about 25% of the IR measurements. The fundamental theory and methods developed in this research can be used to detect fast-varying temperature phenomena in combination with infrared filters.
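A hedged one-dimensional sketch of the kind of temperature-rise prediction compared against the IR measurements is shown below: an explicit finite-difference solution of a heat equation with a localized HIFU heating term. All material properties, power densities, and grid parameters are assumed, generic values, not those of the study.

```python
# Hedged 1-D finite-difference sketch of HIFU heating in a tissue phantom.
import numpy as np

nx, dx, dt = 200, 1e-4, 5e-4           # grid spacing (m) and time step (s), assumed
alpha = 1.4e-7                          # thermal diffusivity (m^2/s), generic tissue value
rho_c = 4.0e6                           # volumetric heat capacity (J/m^3/K), assumed

q = np.zeros(nx)
q[95:105] = 5e6                         # absorbed HIFU power density at the focus (W/m^3)

T = np.zeros(nx)                        # temperature rise above ambient (K)
for _ in range(int(2.0 / dt)):          # 2 s of insonation
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    lap[0] = lap[-1] = 0.0              # crude insulated boundaries
    T += dt * (alpha * lap + q / rho_c)

print(f"peak temperature rise: {T.max():.2f} K")
```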
Computational Science News | Computational Science | NREL
NREL Launches New Website for High-Performance Computing System Users (February 28, 2018): The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) systems.
High-throughput sequence alignment using Graphics Processing Units
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
2007-01-01
Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
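As a CPU-side illustration of the exact-match lookups that MUMmerGPU accelerates on the GPU, the short Python sketch below builds a (naive) suffix array for a reference string and binary-searches it for a query; the sequences are invented and the code is not derived from MUMmer.

```python
# Hedged sketch: exact-match query positions via a naive suffix array.
def suffix_array(ref):
    # O(n^2 log n) construction; fine for a toy reference
    return sorted(range(len(ref)), key=lambda i: ref[i:])

def find_exact(ref, sa, query):
    # binary search for the leftmost suffix that starts with `query`
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if ref[sa[mid]:sa[mid] + len(query)] < query:
            lo = mid + 1
        else:
            hi = mid
    hits = []
    while lo < len(sa) and ref[sa[lo]:sa[lo] + len(query)] == query:
        hits.append(sa[lo])
        lo += 1
    return hits

ref = "ACGTACGTGACGT"
sa = suffix_array(ref)
print(find_exact(ref, sa, "ACGT"))   # -> start positions 9, 0, 4 (suffix-array order)
```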
Military application of flat panel displays in the Vetronics Technology Testbed prototype vehicle
NASA Astrophysics Data System (ADS)
Downs, Greg; Roller, Gordon; Brendle, Bruce E., Jr.; Tierney, Terrance
2000-08-01
The ground combat vehicle crew of tomorrow must be able to perform their mission more effectively and efficiently if they are to maintain dominance over ever more lethal enemy forces. Increasing performance, however, becomes even more challenging when the soldier is subject to reduced crew sizes, a never-ending requirement to adapt to ever-evolving technologies, and the demand to assimilate an overwhelming array of battlefield data. This, combined with the requirement to fight with equal effectiveness at any time of the day or night in all types of weather conditions, makes it clear that this crew of tomorrow will need timely, innovative solutions to overcome this multitude of barriers if they are to achieve their objectives. To this end, the U.S. Army is pursuing advanced crew stations with human-computer interfaces that will allow the soldier to take full advantage of emerging technologies and make efficient use of the battlefield information available to him, in a program entitled 'Vetronics Technology Testbed.' Two critical components of the testbed are a complement of panoramic indirect-vision displays to permit drive-by-wire and multi-function displays for managing lethality, mobility, survivability, situational awareness, and command and control of the vehicle. These displays are being developed and built by Computing Devices Canada, Ltd. This paper addresses the objectives of the testbed program and the technical requirements and design of the displays.
Intelligent Microscopes: Recent And Near-Future Advances
NASA Astrophysics Data System (ADS)
Prewitt, Judith M. S.
1980-02-01
Robert Hooke conjectured about fluid circulation in plants as well as in animals in Micrographia, in a passage that is equally important as a commentary on the dependence, not of technology on science, but of science on technology: It seems very probable that Nature has ... very many appropriated instruments and contrivances, whereby to bring her designs and end to pass, which 'tis not improbable but that some diligent observer, if helped with better Microscopes, may in time detect. This paper, written in the form of a scientific poem, reviews the current and near-future state-of-the-art of automated intelligent microscopes based on computer science and technology. The basic concepts of computer intelligence for cytology and histology are presented and elaborated. Limitations of commercial devices and research prototypes are examined (Dx), and remedies are suggested (Rx). The course of action proposed and being undertaken constitutes an original contribution toward advancing the state-of-the-science, in the hope of advancing the state-of-the-art of medicine. With rapid, contemporary advances in both science and technology, it may now be appropriate to modify Hooke's passage: It seems very probable that Nature has ... very many appropriated instruments and contrivances, whereby to bring her designs and end to pass, which 'tis not improbable but that some diligent observer, if helped with Intelligent Microscopes, may in time detect.
Rajan, Jayant V; Moura, Juliana; Gourley, Gato; Kiso, Karina; Sizilio, Alexandre; Cortez, Ana Maria; Riley, Lee W; Veras, Maria Amelia; Sarkar, Urmimala
2016-11-17
Mobile technology to support community health has surged in popularity, yet few studies have systematically examined usability of mobile platforms for this setting. We conducted a mixed-methods study of 14 community healthcare workers at a public healthcare clinic in São Paulo, Brazil. We held focus groups with community healthcare workers to elicit their ideas about a mobile health application and used this input to build a prototype app. A pre-use test survey was administered to all participants, who subsequently use-tested the app on three different devices (iPhone, iPad mini, iPad Air). Usability was assessed by objectively scored data entry errors and through a post-use focus group held to gather open-ended feedback on end-user satisfaction. All of the participants were women, ranging from 18-64 years old. A large percentage (85.7%) of participants had at least a high school education. Internet (92.8%), computer (85.7%) and cell phone (71.4%) use rates were high. Data entry error rates were also high, particularly in free text fields, ranging from 92.3 to 100%. Error rates were comparable across device type. In a post-use focus group, participants reported that they found the app easy to use and felt that its design was consistent with their vision. The participants raised several concerns, including that they did not find filling out the forms in the app to be a useful task. They also were concerned about an app potentially creating more work for them and personal security issues related to carrying a mobile device in low-income areas. In a cohort of formally educated community healthcare workers with high levels of personal computer and cell phone use, we identified no technological barriers to adapting their existing work to a mobile device based system. Transferring current data entry work into a mobile platform, however, uncovered underlying dissatisfaction with some data entry tasks. This dissatisfaction may be a more significant barrier than the data entry errors our testing revealed. Our results highlight the fact that without a deep understanding of local process to optimize usability, technology-based solutions in health may fail. Developing such an understanding must be a central component in the design of any mHealth solution in global health.
Research | Computational Science | NREL
NREL's computational science experts use advanced high-performance computing (HPC) technologies, thereby accelerating the transformation of our nation's energy system. These computational science capabilities enable high-impact research.
Development of Sensors for Aerospace Applications
NASA Technical Reports Server (NTRS)
Medelius, Pedro
2005-01-01
Advances in technology have led to the availability of smaller and more accurate sensors. Computer power to process large amounts of data is no longer the prevailing issue; thus multiple and redundant sensors can be used to obtain more accurate and comprehensive measurements in a space vehicle. The successful integration and commercialization of micro- and nanotechnology for aerospace applications require that a close and interactive relationship be developed between the technology provider and the end user early in the project. Close coordination between the developers and the end users is critical since qualification for flight is time-consuming and expensive. The successful integration of micro- and nanotechnology into space vehicles requires a coordinated effort throughout the design, development, installation, and integration processes.
Software-defined Radio Based Measurement Platform for Wireless Networks
Chao, I-Chun; Lee, Kang B.; Candell, Richard; Proctor, Frederick; Shen, Chien-Chung; Lin, Shinn-Yan
2015-01-01
End-to-end latency is critical to many distributed applications and services that are based on computer networks. There has been a dramatic push to adopt wireless networking technologies and protocols (such as WiFi, ZigBee, WirelessHART, Bluetooth, ISA100.11a, etc.) into time-critical applications. Examples of such applications include industrial automation, telecommunications, power utility, and financial services. While performance measurement of wired networks has been extensively studied, measuring and quantifying the performance of wireless networks face new challenges and demand different approaches and techniques. In this paper, we describe the design of a measurement platform based on the technologies of software-defined radio (SDR) and IEEE 1588 Precision Time Protocol (PTP) for evaluating the performance of wireless networks. PMID:27891210
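As a rough illustration of the measurement principle (not the NIST platform's implementation): once sender and receiver clocks are disciplined by IEEE 1588 PTP, end-to-end latency can be obtained by embedding the transmit timestamp in each probe packet and subtracting it from the receive timestamp. The port number and probe format below are placeholder assumptions.

import socket, struct, time

PORT = 50007                                    # arbitrary test port

def send_probe(dst_host):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t_tx = time.time_ns()                       # assumes a PTP-disciplined system clock
    sock.sendto(struct.pack("!Q", t_tx), (dst_host, PORT))
    sock.close()

def receive_probes(count=100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    latencies_us = []
    for _ in range(count):
        data, _addr = sock.recvfrom(64)
        t_rx = time.time_ns()                   # same PTP-disciplined timescale as the sender
        (t_tx,) = struct.unpack("!Q", data[:8])
        latencies_us.append((t_rx - t_tx) / 1000.0)
    sock.close()
    return latencies_us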
Software-defined Radio Based Measurement Platform for Wireless Networks.
Chao, I-Chun; Lee, Kang B; Candell, Richard; Proctor, Frederick; Shen, Chien-Chung; Lin, Shinn-Yan
2015-10-01
End-to-end latency is critical to many distributed applications and services that are based on computer networks. There has been a dramatic push to adopt wireless networking technologies and protocols (such as WiFi, ZigBee, WirelessHART, Bluetooth, ISA100.11a, etc.) into time-critical applications. Examples of such applications include industrial automation, telecommunications, power utility, and financial services. While performance measurement of wired networks has been extensively studied, measuring and quantifying the performance of wireless networks face new challenges and demand different approaches and techniques. In this paper, we describe the design of a measurement platform based on the technologies of software-defined radio (SDR) and IEEE 1588 Precision Time Protocol (PTP) for evaluating the performance of wireless networks.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Rogers, David
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
Building and measuring a high performance network architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kramer, William T.C.; Toole, Timothy; Fisher, Chuck
2001-04-20
Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.
The Challenges of Adopting a Culture of Mission Command in the US Army
2015-05-23
Author: LTC(P) James W. Wright. Subject terms: mission command. Recoverable abstract fragments note a tendency toward centralized control and less risk, and observe that the development and implementation of high-end information technology creates a paradox for mission command.
Data preservation at the Fermilab Tevatron
Amerio, S.; Behari, S.; Boyd, J.; ...
2017-01-22
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Parallel Computing: Some Activities in High Energy Physics
NASA Astrophysics Data System (ADS)
Willers, Ian
This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.
2007-06-01
This study examines technology adoption, focusing on cost and benefit uncertainty as well as network effects applied to end users and their organizations. Specifically, it explores Department of Defense (DoD) acquisition programs.
The Science and Technology of Future Space Missions
NASA Astrophysics Data System (ADS)
Bonati, A.; Fusi, R.; Longoni, F.
1999-12-01
Future space missions span a wide range of scientific objectives. Following several successful scientific missions, further international cornerstone experiments are planned to study the evolution of the universe, primordial stellar systems, and our solar system. Space missions for surveying the cosmic microwave background radiation, deep-field searches in the near- and mid-infrared, and planetary exploration will be carried out. Several fields are open for research and development in the space business. Three major categories can be found: detector technology in different areas, electronics, and software. At LABEN, a Finmeccanica company, we are focusing on the technologies needed to respond to these challenging scientific demands. Particle trackers based on silicon micro-strips supported by lightweight structures (CFRP) are studied. In the X-ray field, CCDs with very small pixels are investigated to increase the spatial resolution of focal-plane detectors. High-efficiency and highly miniaturized high-voltage power supplies are developed for detectors with an increasingly large number of phototubes. Material research is underway to study material properties at extreme temperatures. Low-temperature mechanical structures are designed for cryogenic (20 K) detectors in order to maintain high precision in pointing the instrument. Miniaturization of front-end electronics with low power consumption and a high number of signal processing channels is investigated; silicon-based microchips (ASICs) are designed and developed using state-of-the-art technology. Miniaturized instruments to investigate planetary surfaces using X-ray and gamma-ray scattering techniques are developed. The data obtained from the detectors have to be processed, compressed, formatted, and stored before their transmission to ground. These tasks open up additional strategic areas of development, such as microprocessor-based electronics for high-speed and parallel data processing. Powerful computers with customized architectures are designed and developed. High-speed intercommunication networks are studied and tested. In parallel to the hardware research activities, software development is undertaken for several purposes: digital and video compression algorithms, payload and spacecraft control and diagnostics, scientific processing algorithms, etc. In addition, embedded Java virtual machines are studied for tele-science applications (direct link between scientist console and scientific payload). At the system engineering level, the demand for spacecraft autonomy increases for planetology missions: reliable intelligent systems that can operate for long periods without human intervention from the ground are required and investigated. A technologically challenging but less glamorous area of development is represented by the laboratory equipment for end-to-end testing (on ground) of payload instruments. The main fields are cryogenics, laser and X-ray optics, microwave radiometry, and UV and infrared testing systems.
Earth and Space Sciences Project Services for NASA HPCC
NASA Technical Reports Server (NTRS)
Merkey, Phillip
2002-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies (CT) Project and to engage the Beowulf cluster computing community as well as the high-performance computing research community, so that the applicability of these technologies to the scientific community represented by the CT Project could be predicted and long-term strategies formulated to provide the computational resources necessary to attain the Project's anticipated scientific objectives. Specifically, the goal of the evaluation effort was to use the information gathered over the course of the Round-3 investigations to quantify trends in scientific expectations, algorithmic requirements, and the capabilities of high-performance computers to satisfy this anticipated need.
Trends in life science grid: from computing grid to knowledge grid.
Konagaya, Akihiko
2006-12-18
Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and data handling that exceed the capacity of a single institution. This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid, and knowledge grid. Computing grid technologies have matured enough to solve high-throughput, real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for forming communities that share tacit knowledge. By extending the concept of the grid from computing grid to knowledge grid, a grid can serve not only as a sharable computing resource but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.
Trends in life science grid: from computing grid to knowledge grid
Konagaya, Akihiko
2006-01-01
Background Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and data handling that exceed the capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid, and knowledge grid. Computing grid technologies have matured enough to solve high-throughput, real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for forming communities that share tacit knowledge. Conclusion By extending the concept of the grid from computing grid to knowledge grid, a grid can serve not only as a sharable computing resource but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community. PMID:17254294
Terascale Computing in Accelerator Science and Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Kwok
2002-08-21
We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.
Advanced scoring method of eco-efficiency in European cities.
Moutinho, Victor; Madaleno, Mara; Robaina, Margarita; Villar, José
2018-01-01
This paper analyzes a set of selected German and French cities' performance in terms of the relative behavior of their eco-efficiencies, computed as the ratio of their gross domestic product (GDP) over their CO2 emissions. For this analysis, eco-efficiency scores of the selected cities are computed using the data envelopment analysis (DEA) technique, taking the eco-efficiencies as outputs, and the inputs being the energy consumption, the population density, the labor productivity, the resource productivity, and the patents per inhabitant. Once DEA results are analyzed, the Malmquist productivity indexes (MPI) are used to assess the time evolution of the technical efficiency, technological efficiency, and productivity of the cities over the window periods 2000 to 2005 and 2005 to 2008. Some of the main conclusions are that (1) most of the analyzed cities seem to have suboptimal scales, being one of the causes of their inefficiency; (2) there is evidence that high GDP over CO2 emissions does not imply high eco-efficiency scores, meaning that DEA-like approaches are useful to complement more simplistic ranking procedures, pointing out potential inefficiencies at the input levels; (3) efficiencies performed worse during the period 2000-2005 than during the period 2005-2008, suggesting the possibility of corrective actions taken during or at the end of the first period but impacting only on the second period, probably due to an increasing environmental awareness of policymakers and governors; and (4) MPI analysis shows a positive technological evolution of all cities, according to the general technological evolution of the reference cities, reflecting a generalized convergence of most cities to their technological frontier and therefore an evolution in the right direction.
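For readers unfamiliar with DEA, the sketch below shows one common way such scores can be computed: an input-oriented CCR envelopment model solved as a linear program per decision-making unit (city). This is not the authors' code; the data layout, function name, toy data, and use of scipy are assumptions, and eco-efficiency (GDP over CO2) would appear as one of the output columns.

import numpy as np
from scipy.optimize import linprog

def dea_ccr_scores(X, Y):
    """Input-oriented CCR DEA. X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
        c = np.zeros(n + 1)
        c[0] = 1.0
        # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
        b_in = np.zeros(m)
        # outputs: -sum_j lambda_j * y_rj <= -y_ro  (i.e. at least DMU o's output)
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# toy data: 6 cities, 3 inputs, 1 output (eco-efficiency)
X = np.array([[5., 2., 1.], [4., 3., 1.], [6., 1., 2.], [3., 2., 2.], [5., 4., 1.], [2., 3., 3.]])
Y = np.array([[10.], [8.], [12.], [9.], [7.], [6.]])
print(np.round(dea_ccr_scores(X, Y), 3))   # efficiency scores in (0, 1]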
High-Performance Computing User Facility | Computational Science | NREL
The High-Performance Computing (HPC) User Facility provides access to advanced computing systems, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System.
Computers and videodiscs in pathology education: ECLIPS as an example of one approach.
Thursh, D R; Mabry, F; Levy, A H
1986-03-01
We have enumerated ways in which the evolving computer and videodisc technologies are being used in pathology education and discussed in some detail the particular use with which we are most familiar, text management. While it is probably premature to speculate as to how these technologies will ultimately affect pathology education, one recent trend--the convergence that seems to be developing between those working on expert consulting systems and those working primarily on educational applications--will probably influence this impact substantially. We believe that we are moving, from opposite directions, toward the same end result, namely, the use of machine intelligence to facilitate and augment human learning. We expect that, as the two groups come closer together, very powerful, interesting, and eminently useful educational tools will emerge. While this is occurring, we think that most would agree that one of the very urgent needs is to develop forums in which the academic and practice communities can interact with researchers and developers. With apologies to Clemenceau, computers are rapidly becoming too important to be left exclusively to computer scientists. Such forums would serve to give these communities a chance to learn what the new technologies have to offer and give developers a better idea of where these technologies can make the greatest contributions.
Injecting Computational Thinking into Computing Activities for Middle School Girls
ERIC Educational Resources Information Center
Webb, Heidi Cornelia
2013-01-01
Advances in technology have caused high schools to update their computer science curricula; however there has been little analogous attention to technology-related education in middle schools. With respect to computer-related knowledge and skills, middle school students are at a critical phase in life, exploring individualized education options…
Granata, C; Pino, M; Legouverneur, G; Vidal, J-S; Bidaud, P; Rigaud, A-S
2013-01-01
Socially assistive robotics for elderly care is a growing field. However, although robotics has the potential to support the elderly in daily tasks by offering specific services, the development of usable interfaces is still a challenge. Since several factors, such as age- or disease-related changes in perceptual or cognitive abilities and familiarity with computer technologies, influence technology use, they must be considered when designing interfaces for these users. This paper presents findings from usability testing of two different services provided by a socially assistive robot intended for elderly people with cognitive impairment: a grocery shopping list and an agenda application. The main goal of this study is to identify the usability problems of the robot interface for target end-users as well as to isolate the human factors that affect the use of the technology by the elderly. Socio-demographic characteristics and computer experience were examined as factors that could have an influence on task performance. A group of 11 elderly persons with Mild Cognitive Impairment and a group of 11 cognitively healthy elderly individuals took part in this study. Performance measures (task completion time and number of errors) were collected. Cognitive profile, age, and computer experience were found to impact task performance. Participants with cognitive impairment completed the tasks while committing more errors than cognitively healthy elderly participants. In contrast, younger participants and those with previous computer experience were faster at completing the tasks, confirming previous findings in the literature. The overall results suggested that the interfaces and contents of the services assessed were usable by older adults with cognitive impairment. However, some usability problems were identified and should be addressed to better meet the needs and capacities of target end-users.
Computer Courses in Higher-Education: Improving Learning by Screencast Technology
ERIC Educational Resources Information Center
Ghilay, Yaron; Ghilay, Ruth
2015-01-01
The aim of the study was to find out a method designated to improve the learning of computer courses by adding Screencast technology. The intention was to measure the influence of high-quality clips produced by Screencast technology, on the learning process of computer courses. It was required to find out the characteristics (pedagogical and…
A PCIe Gen3 based readout for the LHCb upgrade
NASA Astrophysics Data System (ADS)
Bellato, M.; Collazuol, G.; D'Antone, I.; Durante, P.; Galli, D.; Jost, B.; Lax, I.; Liu, G.; Marconi, U.; Neufeld, N.; Schwemmer, R.; Vagnoni, V.
2014-06-01
The architecture of the data acquisition system foreseen for the LHCb upgrade, to be installed by 2018, is devised to read out events without a hardware trigger, synchronously with the LHC bunch-crossing rate of 40 MHz. Within this approach the readout boards act as a bridge between the front-end electronics and the High Level Trigger (HLT) computing farm. The baseline design for the LHCb readout is an ATCA board requiring dedicated crates. A local area standard network protocol is implemented in the on-board FPGAs to read out the data. The alternative solution proposed here consists in building the readout boards as PCIe peripherals of the event-builder servers. The main architectural advantage is that the protocol and link technology of the event builder can be left open until very late, to profit from the most cost-effective industry technology available at the time of the LHC LS2.
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations
NASA Astrophysics Data System (ADS)
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos
2017-12-01
Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
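Since BrainFrame integrates the PyNN framework, a model description might look like the hedged sketch below. BrainFrame's actual backend-selection API is not described in the abstract, so the command-line backend switch and the use of a standard PyNN cell type (in place of the paper's extended Hodgkin-Huxley inferior-olive model) are assumptions for illustration only.

import importlib, sys

# pick a simulator back-end by name, e.g. "nest" or "neuron" (assumed convention)
backend = sys.argv[1] if len(sys.argv) > 1 else "nest"
sim = importlib.import_module(f"pyNN.{backend}")

sim.setup(timestep=0.1)
# toy network: a standard Hodgkin-Huxley cell type stands in for the paper's model
cells = sim.Population(100, sim.HH_cond_exp())
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
sim.Projection(noise, cells, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))
cells.record("v")
sim.run(1000.0)    # the same description could be dispatched to any supported accelerator
sim.end()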
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations.
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I; Strydis, Christos
2017-12-01
The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload's performance characteristics. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
Capturing atmospheric effects on 3D millimeter wave radar propagation patterns
NASA Astrophysics Data System (ADS)
Cook, Richard D.; Fiorino, Steven T.; Keefer, Kevin J.; Stringer, Jeremy
2016-05-01
Traditional radar propagation modeling is done using a path transmittance with little to no input for weather and atmospheric conditions. As radar advances into the millimeter wave (MMW) regime, atmospheric effects such as attenuation and refraction become more pronounced than at traditional radar wavelengths. The DoD High Energy Laser Joint Technology Office's High Energy Laser End-to-End Operational Simulation (HELEEOS), in combination with the Laser Environmental Effects Definition and Reference (LEEDR) code, has shown great promise simulating atmospheric effects on laser propagation. Indeed, the LEEDR radiative transfer code has been validated from the UV through RF. Our research attempts to apply these models to characterize the far-field radar pattern in three dimensions as a signal propagates from an antenna towards a point in space. Furthermore, we do so using realistic three-dimensional atmospheric profiles. The results from these simulations are compared to those from traditional radar propagation software packages. In summary, a fast-running method has been investigated which can be incorporated into computational models to enhance understanding and prediction of MMW propagation through various atmospheric and weather conditions.
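The core calculation such a fast-running method rests on can be illustrated with a toy example (not HELEEOS or LEEDR): accumulate a height-dependent attenuation coefficient along a slant path and convert the integrated loss to a transmittance. The attenuation profile values below are invented placeholders.

import numpy as np

def path_transmittance(alt_km, alpha_db_per_km, elevation_deg, range_km, n_steps=500):
    """Integrate a height-dependent attenuation coefficient along a slant path."""
    s = np.linspace(0.0, range_km, n_steps)            # distance along the path
    h = s * np.sin(np.radians(elevation_deg))          # flat-earth altitude at each step
    alpha = np.interp(h, alt_km, alpha_db_per_km)      # dB/km at each step
    # trapezoidal path integration of the loss in dB
    total_db = float(np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(s)))
    return 10.0 ** (-total_db / 10.0), total_db

# hypothetical clear-air attenuation profile (dB/km) versus altitude (km)
alt = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
alpha = np.array([0.12, 0.09, 0.07, 0.03, 0.01])
tau, loss_db = path_transmittance(alt, alpha, elevation_deg=5.0, range_km=20.0)
print(f"one-way loss {loss_db:.2f} dB, transmittance {tau:.3f}")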
Coordinated Fault-Tolerance for High-Performance Computing Final Project Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panda, Dhabaleswar Kumar; Beckman, Pete
2011-07-28
With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included: MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
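The FTB itself is a C publish-subscribe API whose exact signatures are not reproduced here; the following minimal in-process sketch only illustrates the coordination idea, with topic names and event fields chosen for illustration: subsystems publish fault events on named topics, and any other subsystem can subscribe and react.

from collections import defaultdict
from dataclasses import dataclass, field
import time

@dataclass
class FaultEvent:
    source: str                                  # e.g. "mpi", "scheduler", "monitor"
    severity: str                                # e.g. "warning", "fatal"
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class FaultBackplane:
    def __init__(self):
        self._subscribers = defaultdict(list)    # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        for cb in self._subscribers[topic]:
            cb(event)

# example: a checkpoint library reacts to node-failure predictions from a monitor
bp = FaultBackplane()
bp.subscribe("node.failure_predicted",
             lambda ev: print(f"checkpointing job due to {ev.source}: {ev.payload}"))
bp.publish("node.failure_predicted",
           FaultEvent(source="monitor", severity="warning", payload={"node": "n042"}))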
Assessment of brain-machine interfaces from the perspective of people with paralysis
NASA Astrophysics Data System (ADS)
Blabe, Christine H.; Gilja, Vikash; Chestek, Cindy A.; Shenoy, Krishna V.; Anderson, Kim D.; Henderson, Jaimie M.
2015-08-01
Objective. One of the main goals of brain-machine interface (BMI) research is to restore function to people with paralysis. Currently, multiple BMI design features are being investigated, based on various input modalities (externally applied and surgically implantable sensors) and output modalities (e.g. control of computer systems, prosthetic arms, and functional electrical stimulation systems). While these technologies may eventually provide some level of benefit, they each carry associated burdens for end-users. We sought to assess the attitudes of people with paralysis toward using various technologies to achieve particular benefits, given the burdens currently associated with the use of each system. Approach. We designed and distributed a technology survey to determine the level of benefit necessary for people with tetraplegia due to spinal cord injury to consider using different technologies, given the burdens currently associated with them. The survey queried user preferences for 8 BMI technologies including electroencephalography, electrocorticography, and intracortical microelectrode arrays, as well as a commercially available eye tracking system for comparison. Participants used a 5-point scale to rate their likelihood to adopt these technologies for 13 potential control capabilities. Main Results. Survey respondents were most likely to adopt BMI technology to restore some of their natural upper extremity function, including restoration of hand grasp and/or some degree of natural arm movement. High speed typing and control of a fast robot arm were also of interest to this population. Surgically implanted wireless technologies were twice as ‘likely’ to be adopted as their wired equivalents. Significance. Assessing end-user preferences is an essential prerequisite to the design and implementation of any assistive technology. The results of this survey suggest that people with tetraplegia would adopt an unobtrusive, autonomous BMI system for both restoration of upper extremity function and control of external devices such as communication interfaces.
Lindberg, D A; Humphreys, B L
1995-01-01
The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116
The Nike Laser Facility and its Capabilities
NASA Astrophysics Data System (ADS)
Serlin, V.; Aglitskiy, Y.; Chan, L. Y.; Karasik, M.; Kehne, D. M.; Oh, J.; Obenschain, S. P.; Weaver, J. L.
2013-10-01
The Nike laser is a 56-beam krypton fluoride (KrF) system that provides 3 to 4 kJ of laser energy on target. The laser uses induced spatial incoherence to achieve highly uniform focal distributions. Forty-four beams are overlapped onto the target with peak intensities up to 10^16 W/cm2. The effective time-averaged illumination nonuniformity is <0.2%. Nike produces highly uniform ablation pressures on target, allowing well-controlled experiments at pressures up to 20 Mbar. The other 12 laser beams are used to generate diagnostic x-rays for radiographing the primary laser-illuminated target. The facility includes a front end that generates the desired temporal and spatial laser profiles, two electron-beam-pumped KrF amplifiers, a computer-controlled optical system, and a vacuum target chamber for experiments. Nike is used to study the physics and technology issues of direct-drive laser fusion, such as hydrodynamic and laser-plasma instabilities, the response of materials to extreme pressures, and the generation of X rays from laser-heated targets. Nike features a computer-controlled data acquisition system; high-speed, high-resolution x-ray and visible imaging systems; x-ray and visible spectrometers; and cryogenic target capability. Work supported by DOE/NNSA.
Integrating Technology to Maximize Learning
ERIC Educational Resources Information Center
Jones, Eric
2007-01-01
Such initiatives as one-to-one computing, laptop learning, and technology immersion are gaining momentum in middle level and high schools, but the key to their success is more than cutting-edge technology. Henrico County Public Schools, a pioneer in educational technology in Virginia, launched a one-to-one computing initiative in 2001. The…
Asynchronous transfer mode link performance over ground networks
NASA Technical Reports Server (NTRS)
Chow, E. T.; Markley, R. W.
1993-01-01
The results of an experiment to determine the feasibility of using asynchronous transfer mode (ATM) technology to support advanced spacecraft missions that require high-rate ground communications and, in particular, full-motion video are reported. Potential nodes in such a ground network include Deep Space Network (DSN) antenna stations, the Jet Propulsion Laboratory, and a set of national and international end users. The experiment simulated a lunar microrover, lunar lander, the DSN ground communications system, and distributed science users. The users were equipped with video-capable workstations. A key feature was an optical fiber link between two high-performance workstations equipped with ATM interfaces. Video was also transmitted through JPL's institutional network to a user 8 km from the experiment. Variations in video performance depending on the networks and computers involved were observed, and the results are reported.
The energy performance of thermochromic glazing
NASA Astrophysics Data System (ADS)
Diamantouros, Pavlos
This study investigated the energy performance of thermochromic glazing by simulating a model of a small building in the EnergyPlus program (U.S. DOE). The physical attributes of the thermochromic samples examined came from actual laboratory samples fabricated in UCL's Department of Chemistry (Prof. I. P. Parkin). It was found that they can substantially reduce cooling loads while requiring the same heating loads as a high-end low-e double glazing. The reductions in annual cooling energy required were in the 20%-40% range, depending on sample, location, and building layout. A series of sensitivity analyses showed the importance of the switching temperature and emissivity factor in the performance of the glazing. Finally, an ideal pane was designed to explore the limits this technology has to offer.
Society for College Science Teachers: High Technology.
ERIC Educational Resources Information Center
Menefee, Robert
1983-01-01
Presents findings of a study group on high technology charged with determining a definition, assessing current educational response, and examining implications for the future. Topics addressed include: super-techs; computer-aided design/computer-aided manufacture (CAD/CAM); structural unemployment; a two-plus-two curriculum; and educational…
A Lightweight Protocol for Secure Video Streaming
Morkevicius, Nerijus; Bagdonas, Kazimieras
2018-01-01
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing “Fog Node-End Device” layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard. PMID:29757988
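A minimal sketch of the authentication step only, not the authors' full protocol: each UDP datagram carries a truncated HMAC tag so the receiver can verify data-source authenticity and integrity. Key management, encryption, multicast handling, and redundancy from the paper are omitted; the key, tag length, and header layout are placeholders.

import hmac, hashlib, socket, struct

KEY = b"placeholder-pre-shared-key"     # placeholder pre-shared key
TAG_LEN = 16                            # truncated HMAC-SHA256 tag

def send_frame(sock, addr, seq, frame_bytes):
    header = struct.pack("!I", seq)                                   # 4-byte sequence number
    tag = hmac.new(KEY, header + frame_bytes, hashlib.sha256).digest()[:TAG_LEN]
    sock.sendto(header + frame_bytes + tag, addr)

def verify_frame(datagram):
    header, body, tag = datagram[:4], datagram[4:-TAG_LEN], datagram[-TAG_LEN:]
    expected = hmac.new(KEY, header + body, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(tag, expected):
        return None                      # drop tampered or forged frame
    (seq,) = struct.unpack("!I", header)
    return seq, body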
A Lightweight Protocol for Secure Video Streaming.
Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis
2018-05-14
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
Schools Facing the Expiration of Windows XP
ERIC Educational Resources Information Center
Cavanagh, Sean
2013-01-01
Microsoft's plans to end support for Windows XP, believed to be the dominant computer operating system in K-12 education, could pose big technological and financial challenges for districts nationwide--issues that many school systems have yet to confront. The giant software company has made it clear for years that it plans to stop supporting XP…
NASA Technical Reports Server (NTRS)
1997-01-01
Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.
Mobile Edge Computing Empowers Internet of Things
NASA Astrophysics Data System (ADS)
Ansari, Nirwan; Sun, Xiang
In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging the fiber-wireless access technology, the cloudlet concept, and the software defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices (belonging to the same user) are associated with a specific proxy Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the IoT data (generated by its IoT devices) in real time. Moreover, we introduce the semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problems in the IoT system. In addition, we propose two dynamic proxy VM migration methods to minimize the end-to-end delay between proxy VMs and their IoT devices and to minimize the total on-grid energy consumption of the cloudlets, respectively. Performance of the proposed methods is validated via extensive simulations.
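A hedged sketch of the placement idea only, not the authors' migration algorithms: assign each user's proxy VM to the cloudlet with the lowest estimated end-to-end delay that still has spare capacity. All names, delay values, and capacities below are invented.

def place_proxy_vms(delay, capacity):
    """delay[u][c] = estimated delay from user u's IoT devices to cloudlet c."""
    assignment = {}
    load = {c: 0 for c in capacity}
    for user, delays in delay.items():
        # consider cloudlets from nearest to farthest for this user
        for cloudlet in sorted(delays, key=delays.get):
            if load[cloudlet] < capacity[cloudlet]:
                assignment[user] = cloudlet
                load[cloudlet] += 1
                break
    return assignment

delay = {"alice": {"c1": 4, "c2": 9}, "bob": {"c1": 3, "c2": 5}, "eve": {"c1": 6, "c2": 7}}
capacity = {"c1": 1, "c2": 2}
print(place_proxy_vms(delay, capacity))   # {'alice': 'c1', 'bob': 'c2', 'eve': 'c2'}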
Efficiency improvements in US Office equipment: Expected policy impacts and uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koomey, J.G.; Cramer, M.; Piette, M.A.
This report describes a detailed end-use forecast of office equipment energy use for the US commercial sector. We explore the likely impacts of the US Environmental Protection Agency's ENERGY STAR office equipment program and the potential impacts of advanced technologies. The ENERGY STAR program encourages manufacturers to voluntarily incorporate power saving features into personal computers, monitors, printers, copiers, and fax machines in exchange for allowing manufacturers to use the EPA ENERGY STAR logo in their advertising campaigns. The Advanced technology case assumes that the most energy efficient current technologies are implemented regardless of cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, G.; Mansur, D.L.; Ruhter, W.D.
1994-10-01
This report presents the details of the Lawrence Livermore National Laboratory safeguards and securities program. This program is focused on developing new technology, such as x- and gamma-ray spectrometry, for measurement of special nuclear materials. This program supports the Office of Safeguards and Securities in the following five areas; safeguards technology, safeguards and decision support, computer security, automated physical security, and automated visitor access control systems.
A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System
NASA Astrophysics Data System (ADS)
Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.
2010-05-01
The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.
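A minimal sketch, not the ODI pipeline itself, of the split the concept describes between fixed instrument-signature removal and user-controlled follow-on processing; the stage names and data structure are assumptions for illustration.

def subtract_overscan(image):   return {**image, "overscan_subtracted": True}
def remove_crosstalk(image):    return {**image, "crosstalk_removed": True}
def apply_flat_field(image):    return {**image, "flat_fielded": True}

# fixed instrument-signature stages every exposure passes through
STANDARD_STAGES = [subtract_overscan, remove_crosstalk, apply_flat_field]

def reduce_exposure(image, extra_stages=()):
    """Run the standard reduction, then any user-selected stages (end-user flexibility)."""
    for stage in list(STANDARD_STAGES) + list(extra_stages):
        image = stage(image)
    return image

raw = {"exposure_id": "odi_000123"}        # hypothetical exposure record
print(reduce_exposure(raw))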
Effect of Physical Education Teachers' Computer Literacy on Technology Use in Physical Education
ERIC Educational Resources Information Center
Kretschmann, Rolf
2015-01-01
Teachers' computer literacy has been identified as a factor that determines their technology use in class. The aim of this study was to investigate the relationship between physical education (PE) teachers' computer literacy and their technology use in PE. The study group consisted of 57 high school level in-service PE teachers. A survey was used…
Koivusilta, Leena K; Lintonen, Tomi P; Rimpelä, Arja H
2007-01-01
The role of information and communication technology (ICT) in adolescents' lives was studied, with emphasis on whether there exists a digital divide based on sociodemographic background, educational career, and health. The assumption was that some groups of adolescents use ICT more so that their information utilization skills improve (computer use), while others use it primarily for entertainment (digital gaming, contacting friends by mobile phone). Data were collected by mailed survey from a nationally representative sample of 12- to 18-year-olds (n=7,292; response 70%) in 2001 and analysed using ANOVA. Computer use was most frequent among adolescents whose fathers had higher education or socioeconomic status, who came from nuclear families, and who continued studies after compulsory education. Digital gaming was associated with poor school achievement and attending vocational rather than upper secondary school. Mobile phone use was frequent among adolescents whose fathers had lower education or socioeconomic status, who came from non-nuclear families, and whose educational prospects were poor. Intensive use of each ICT form, especially of mobile phones, was associated with health problems. High social position, nuclear family, and a successful educational career signified good health in general, independently of the diverse usage of ICT. There exists a digital divide among adolescents: orientation to computer use is more common in educated well-off families while digital gaming and mobile phone use accumulate at the opposite end of the spectrum. Poorest health was reported by mobile phone users. High social background and success at school signify better health, independently of the ways of using ICT.
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
A high-performance genetic algorithm: using traveling salesman problem as a case.
Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
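The central idea in the two records above -- genes shared by every individual are likely to survive, so they can be frozen and excluded from further recombination and re-evaluation -- can be illustrated with a short sketch. This is an illustrative reconstruction in Python, not the authors' code; the tour representation and mutation operator are assumptions.

```python
import random

def common_genes(population):
    """Return {position: city} for positions where every tour in the
    population carries the same city -- candidates to freeze."""
    frozen = {}
    for pos in range(len(population[0])):
        cities = {tour[pos] for tour in population}
        if len(cities) == 1:
            frozen[pos] = cities.pop()
    return frozen

def mutate(tour, frozen, rate=0.2):
    """Swap mutation that never touches frozen positions, so the shared
    partial solution is preserved and not recomputed in later generations."""
    tour = tour[:]
    free = [i for i in range(len(tour)) if i not in frozen]
    if len(free) >= 2 and random.random() < rate:
        i, j = random.sample(free, 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

# Tiny illustration: 4 random tours over 8 cities.
random.seed(1)
population = [random.sample(range(8), 8) for _ in range(4)]
print(common_genes(population))
```

With random tours the frozen set is usually empty; it grows as the population converges, which is exactly when skipping those positions saves the most redundant work.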
Information technology and ethics: An exploratory factor analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conger, S.; Loch, K.D.; Helft, B.L.
1994-12-31
Ethical dilemmas are situations in which a decision results in unpleasant consequences. The unpleasant consequences are treated as a zero-sum game in which someone always loses. Introducing information technology (IT) to a situation makes the recognition of a potential loser more abstract and difficult to identify, thus an ethical dilemma may go unrecognized. The computer mediates the human relationship, which causes a lost sense of contact with the person at the other end of the computer connection. In 1986, Richard O. Mason published an essay identifying privacy, accuracy, property, and access (PAPA) as the four main ethical issues of the information age. Anecdotes for each issue describe the injured party's perspective to identify consequences resulting from unethical use of information and information technology. This research sought to validate Mason's social issues empirically, but with distinct differences. Mason defined issues to raise awareness and initiate debate on the need for a social agenda; our focus is on individual computer users and the attitudes they hold about ethical behavior in computer use. This study examined the attitudes of the computer user who experiences the ethical dilemma to determine the extent to which ethical components are recognized, and whether Mason's issues form recognizable constructs.
CMOS cassette for digital upgrade of film-based mammography systems
NASA Astrophysics Data System (ADS)
Baysal, Mehmet A.; Toker, Emre
2006-03-01
While full-field digital mammography (FFDM) technology is gaining clinical acceptance, the overwhelming majority (96%) of the installed base of mammography systems are conventional film-screen (FSM) systems. A high-performance, economical, digital cassette-based product to conveniently upgrade FSM systems to FFDM would accelerate the adoption of FFDM, and make the clinical and technical advantages of FFDM available to a larger population of women. The planned FFDM cassette is based on our commercial Digital Radiography (DR) cassette for 10 cm x 10 cm field-of-view spot imaging and specimen radiography, utilizing a 150 micron columnar CsI(Tl) scintillator and 48 micron active-pixel CMOS sensor modules. Unlike a Computed Radiography (CR) cassette, which requires an external digitizer, our DR cassette transfers acquired images to a display workstation within approximately 5 seconds of exposure, greatly enhancing patient flow. We will present the physical performance of our prototype system against other FFDM systems in clinical use today, using established objective criteria such as the Modulation Transfer Function (MTF) and Detective Quantum Efficiency (DQE), and subjective criteria, such as a contrast-detail (CD-MAM) observer performance study. Driven by the strong demand from the computer industry, CMOS technology is one of the lowest cost and most readily accessible technologies available for FFDM today. Recent popular use of CMOS imagers in high-end consumer cameras has also resulted in significant advances in the imaging performance of CMOS sensors relative to rival CCD sensors. This study promises to take advantage of these unique features to develop the first CMOS-based FFDM upgrade cassette.
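As background for the objective criteria listed above, the commonly used frequency-dependent definition of detective quantum efficiency is DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the incident photon fluence and NNPS the normalized noise power spectrum. A minimal sketch in Python; the curves and numbers below are illustrative assumptions, not measurements from the prototype cassette.

```python
import numpy as np

def dqe(mtf, nnps, fluence):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)); fluence q in photons/mm^2,
    NNPS in mm^2, MTF dimensionless, so the result is dimensionless."""
    return mtf ** 2 / (fluence * nnps)

# Illustrative inputs: a smoothly falling MTF and a flat noise floor.
freqs = np.linspace(0.5, 10.0, 20)          # spatial frequency, cycles/mm
mtf = np.exp(-freqs / 8.0)                  # assumed MTF curve
nnps = np.full_like(freqs, 2.0e-5)          # assumed NNPS, mm^2
print(np.round(dqe(mtf, nnps, fluence=5.0e4), 3))
```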
Advanced ECG in 2016: is there more than just a tracing?
Reichlin, Tobias; Abächerli, Roger; Twerenbold, Raphael; Kühne, Michael; Schaer, Beat; Müller, Christian; Sticherling, Christian; Osswald, Stefan
2016-01-01
The 12-lead electrocardiogram (ECG) is the most frequently used technology in clinical cardiology. It is critical for evidence-based management of patients with most cardiovascular conditions, including patients with acute myocardial infarction, suspected chronic cardiac ischaemia, cardiac arrhythmias, heart failure and implantable cardiac devices. In contrast to many other techniques in cardiology, the ECG is simple, small, mobile, universally available and cheap, and therefore particularly attractive. Standard ECG interpretation mainly relies on direct visual assessment. The progress in biomedical computing and signal processing, and the available computational power offer fascinating new options for ECG analysis relevant to all fields of cardiology. Several digital ECG markers and advanced ECG technologies have shown promise in preliminary studies. This article reviews promising novel surface ECG technologies in three different fields. (1) For the detection of myocardial ischaemia and infarction, QRS morphology feature analysis, the analysis of high frequency QRS components (HF-QRS) and methods using vectorcardiography as well as ECG imaging are discussed. (2) For the identification and management of patients with cardiac arrhythmias, methods of advanced P-wave analysis are discussed and the concept of ECG imaging for noninvasive localisation of cardiac arrhythmias is presented. (3) For risk stratification of sudden cardiac death and the selection of patients for medical device therapy, several novel markers including an automated QRS-score for scar quantification, the QRS-T angle or the T-wave peak-to-end-interval are discussed. Despite the existing preliminary data, none of the advanced ECG markers and technologies has yet accomplished the transition into clinical practice. Further refinement of these technologies and broader validation in large unselected patient cohorts are the critical next step needed to facilitate translation of advanced ECG technologies into clinical cardiology.
NASA Astrophysics Data System (ADS)
Sanford, James L.; Schlig, Eugene S.; Prache, Olivier; Dove, Derek B.; Ali, Tariq A.; Howard, Webster E.
2002-02-01
The IBM Research Division and eMagin Corp. jointly have developed a low-power VGA direct view active matrix OLED display, fabricated on a crystalline silicon CMOS chip. The display is incorporated in IBM prototype wristwatch computers running the Linux operating system. IBM designed the silicon chip and eMagin developed the organic stack and performed the back-end-of-line processing and packaging. Each pixel is driven by a constant current source controlled by a CMOS RAM cell, and the display receives its data from the processor memory bus. This paper describes the OLED technology and packaging, and outlines the design of the pixel and display electronics and the processor interface. Experimental results are presented.
Program on application of communications satellites to educational development
NASA Technical Reports Server (NTRS)
Morgan, R. P.; Singh, J. P.
1971-01-01
Interdisciplinary research in needs analysis, communications technology studies, and systems synthesis is reported. Existing and planned educational telecommunications services are studied and library utilization of telecommunications is described. Preliminary estimates are presented of ranges of utilization of educational telecommunications services for 1975 and 1985; instructional and public television, computer-aided instruction, computing resources, and information resource sharing for various educational levels and purposes. Communications technology studies include transmission schemes for still-picture television, use of Gunn effect devices, and TV receiver front ends for direct satellite reception at 12 GHz. Two major studies in the systems synthesis project concern (1) organizational and administrative aspects of a large-scale instructional satellite system to be used with schools and (2) an analysis of future development of instructional television, with emphasis on the use of video tape recorders and cable television. A communications satellite system synthesis program developed for NASA is now operational on the university IBM 360-50 computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2017-12-09
Learn what it will take to create tomorrow's net-zero energy home as scientists reveal the secrets of cool roofs, smart windows, and computer-driven energy control systems. The net-zero energy home: Scientists are working to make tomorrow's homes more than just energy efficient -- they want them to be zero energy. Iain Walker, a scientist in the Lab's Energy Performance of Buildings Group, will discuss what it takes to develop net-zero energy houses that generate as much energy as they use through highly aggressive energy efficiency and on-site renewable energy generation. Talking back to the grid: Imagine programming your house to use less energy if the electricity grid is full or prices are high. Mary Ann Piette, deputy director of Berkeley Lab's building technology department and director of the Lab's Demand Response Research Center, will discuss how new technologies are enabling buildings to listen to the grid and automatically change their thermostat settings or lighting loads, among other demands, in response to fluctuating electricity prices. The networked (and energy efficient) house: In the future, your home's lights, climate control devices, computers, windows, and appliances could be controlled via a sophisticated digital network. If it's plugged in, it'll be connected. Bruce Nordman, an energy scientist in Berkeley Lab's Energy End-Use Forecasting group, will discuss how he and other scientists are working to ensure these networks help homeowners save energy.
Trends in radiology and experimental research.
Sardanelli, Francesco
2017-01-01
European Radiology Experimental, the new journal launched by the European Society of Radiology, is placed in the context of three general and seven radiology-specific trends. After describing the impact of population aging, personalized/precision medicine, and information technology development, the article considers the following trends: the tension between subspecialties and the unity of the discipline; attention to patient safety; the challenge of reproducibility for quantitative imaging; standardized and structured reporting; search for higher levels of evidence in radiology (from diagnostic performance to patient outcome); the increasing relevance of interventional radiology; and continuous technological evolution. The new journal will publish not only studies on phantoms, cells, or animal models but also those describing development steps of imaging biomarkers or those exploring secondary end-points of large clinical trials. Moreover, consideration will be given to studies regarding: computer modelling and computer aided detection and diagnosis; contrast materials, tracers, and theranostics; advanced image analysis; optical, molecular, hybrid and fusion imaging; radiomics and radiogenomics; three-dimensional printing, information technology, image reconstruction and post-processing, big data analysis, teleradiology, clinical decision support systems; radiobiology; radioprotection; and physics in radiology. The journal aims to establish a forum for basic science, computer and information technology, radiology, and other medical subspecialties.
1998-12-01
Two companies have successfully commercialized a specialized welding tool developed at the Marshall Space Flight Center (MSFC). Friction stir welding uses the high rotational speed of a tool and the resulting frictional heat created from contact to crush, "stir" together, and forge a bond between two metal alloys. It has had a major drawback: reliance on a single-piece pin tool. The pin is slowly plunged into the joint between two materials to be welded and rotated at high speed. At the end of the weld, the single-piece pin tool is retracted and leaves a "keyhole," something which is unacceptable when welding cylindrical objects such as drums, pipes and storage tanks. Another drawback is the requirement for different-length pin tools when welding materials of varying thickness. An engineer at the MSFC helped design an automatic retractable pin tool that uses a computer-controlled motor to automatically retract the pin into the shoulder of the tool at the end of the weld, preventing keyholes. This design allows the pin angle and length to be adjusted for changes in material thickness and results in a smooth hole closure at the end of the weld. Benefits of friction stir welding, using the MSFC retractable pin tool technology, include the following: the ability to weld a wide range of alloys, including previously unweldable and composite materials; provision of twice the fatigue resistance of fusion welds and no keyholes; minimization of material distortion; no creation of hazards such as welding fumes, radiation, high voltage, liquid metals, or arcing; automatic retraction of the pin at the end of the weld; and maintenance of full penetration of the pin.
Math and science technology access and use in South Dakota public schools grades three through five
NASA Astrophysics Data System (ADS)
Schwietert, Debra L.
The development of K-12 technology standards, soon to be added to state testing of technology proficiency, and the increasing presence of computers in homes and classrooms reflect the growing importance of technology in current society. This study examined math and science teachers' responses on a survey of technology use in grades three through five in South Dakota. A researcher-developed survey instrument was used to collect data from a random sample of 100 public schools throughout South Dakota. Forced choice and open-ended responses were recorded. Most teachers have access to computers, but they lack resources to purchase software for their content areas, especially in science areas. Three-fourths of teachers in this study reported multiple computers in their classrooms and 67% reported access to labs in other areas of the school building. These numbers are lower than the national average of 84% of teachers with computers in their classrooms and 95% with access to computers elsewhere in the building (USDOE, 2000). Almost eight out of 10 teachers noted time as a barrier to learning more about educational software. Additional barriers included lack of school funds (38%), access to relevant training (32%), personal funds (30%), and poor quality of training (7%). Teachers most often use math and science software as supplemental, with practice tutorials cited as another common use. The most common interest for software was math for both boys and girls. The second most common choice for boys was science and for girls, language arts. Teachers reported that there was no preference for either individual or group work on computers for girls or boys. Most teachers do not systematically evaluate software for gender preferences, but instead review software subjectively.
Earthdata Cloud Analytics Project
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Lynnes, Chris
2018-01-01
This presentation describes a nascent project in NASA to develop a framework to support end-user analytics of NASA's Earth science data in the cloud. The chief benefit of migrating EOSDIS (Earth Observation System Data and Information Systems) data to the cloud is to position the data next to enormous computing capacity to allow end users to process data at scale. The Earthdata Cloud Analytics project will use a service-based approach to facilitate the infusion of evolving analytics technology and the integration with non-NASA analytics or other complementary functionality at other agencies and in other nations.
NASA Astrophysics Data System (ADS)
Priest, Richard Harding
A significant percentage of high school science teachers are not using computers to teach their students or prepare them for standardized testing. A survey of high school science teachers was conducted to determine how they are having students use computers in the classroom, why science teachers are not using computers in the classroom, which variables were relevant to their not using computers, and what effects standardized testing has on the use of technology in the high school science classroom. A self-administered questionnaire was developed to measure these aspects of computer integration and demographic information. A follow-up telephone interview survey of a portion of the original sample was conducted in order to clarify questions, correct misunderstandings, and draw out more holistic descriptions from the subjects. The primary method used to analyze the quantitative data was frequency distributions. Multiple regression analysis was used to investigate the relationships between the barriers and facilitators and the dimensions of instructional use, frequency, and importance of the use of computers. All high school science teachers in a large urban/suburban school district were sent surveys. A response rate of 58% resulted from two mailings of the survey. It was found that contributing factors to why science teachers do not use computers were a shortage of up-to-date computers in their classrooms and other educational commitments and duties that do not leave them enough time to prepare lessons that include technology. While a high percentage of science teachers thought their school and district administrations were supportive of technology, they also believed more inservice technology training and follow-up activities to support that training are needed and more software needs to be created. The majority of the science teachers do not use the computer to help students prepare for standardized tests because they believe they can prepare students more efficiently without a computer. Nearly half of the teachers, however, gave lack of time to prepare instructional materials and lack of a means to project a computer image to the whole class as reasons they do not use computers. A significant percentage thought science standardized testing was having a negative effect on computer use.
Zickler, Claudia; Halder, Sebastian; Kleih, Sonja C; Herbert, Cornelia; Kübler, Andrea
2013-10-01
For many years the reestablishment of communication for people with severe motor paralysis has been a focus of brain-computer interface (BCI) research. Recently applications for entertainment have also been developed. Brain Painting allows the user creative expression through painting pictures. The second, revised prototype of the BCI Brain Painting application was evaluated in its target function - free painting - and compared to the P300 spelling application by four end users with severe disabilities. According to the International Organization for Standardization (ISO), usability was evaluated in terms of effectiveness (accuracy), efficiency (information transfer rate (ITR)), utility metric, subjective workload (National Aeronautics and Space Administration Task Load Index (NASA TLX)) and user satisfaction (Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) 2.0 and Assistive Technology Device Predisposition Assessment (ATD PA), Device Form). The results revealed high performance levels (M≥80% accuracy) in the free painting and the copy painting conditions, ITRs (4.47-6.65 bits/min) comparable to other P300 applications and only low to moderate workload levels (5-49 of 100), thereby showing that the complex task of free painting neither impaired performance nor imposed an insurmountable workload. Users were satisfied with the BCI Brain Painting application. Main obstacles for use in daily life were the system operability and the EEG cap, particularly the need for extensive support for adjustment. The P300 Brain Painting application can be operated with high effectiveness and efficiency. End users with severe motor paralysis would like to use the application in daily life. User-friendliness, specifically ease of use, is a mandatory necessity when bringing BCI to end users. Early and active involvement of users and iterative user-centered evaluation enable developers to work toward this goal. Copyright © 2013 Elsevier B.V. All rights reserved.
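The information transfer rates quoted above are conventionally computed with the Wolpaw formula, which converts selection accuracy and the number of selectable items into bits per selection. A minimal sketch; the 36-item matrix, 80% accuracy and 30 s selection time below are illustrative assumptions, not parameters reported by the study.

```python
import math

def wolpaw_bits_per_selection(n_choices, accuracy):
    """Bits per selection: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p = accuracy
    if p >= 1.0:
        return math.log2(n_choices)
    return (math.log2(n_choices)
            + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n_choices - 1)))

def itr_bits_per_minute(n_choices, accuracy, seconds_per_selection):
    return wolpaw_bits_per_selection(n_choices, accuracy) * 60.0 / seconds_per_selection

# Example: a 6x6 P300 matrix (36 items), 80% accuracy, ~30 s per selection.
print(round(itr_bits_per_minute(36, 0.80, 30.0), 2))   # roughly 6.8 bits/min
```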
Emissive flat panel displays: A challenge to the AMLCD
NASA Astrophysics Data System (ADS)
Walko, R. J.
According to some sources, flat panel displays (FPD's) for computers will represent a 20-40 billion dollar industry by the end of the decade and could leverage up to 100-200 billion dollars in computer sales. Control of the flat panel display industry could be a significant factor in the global economy if FPD's manage to tap into the enormous audio/visual consumer market. Japan presently leads the world in active matrix liquid crystal display (AMLCD) manufacturing, the current leading FPD technology. The AMLCD is basically a light shutter which does not emit light on its own, but modulates the intensity of a separate backlight. However, other technologies, based on light emitting phosphors, could eventually challenge the AMLCD's lead position. These light-emissive technologies do not have the size, temperature and viewing angle limitations of AMLCD's. In addition, they could also be less expensive to manufacture, and require a smaller capital outlay for a manufacturing plant. An overview of these alternative technologies is presented.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.
This hearing before the Senate Subcommittee on Science, Technology, and Space focuses on S. 272, the High-Performance Computing and Communications Act of 1991, a bill that provides for a coordinated federal research and development program to ensure continued U.S. leadership in this area. High performance computing is defined as representing the…
Impact of new computing systems on computational mechanics and flight-vehicle structures technology
NASA Technical Reports Server (NTRS)
Noor, A. K.; Storaasli, O. O.; Fulton, R. E.
1984-01-01
Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.
High Performance Computing and Networking for Science--Background Paper.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Office of Technology Assessment.
The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…
NASA Astrophysics Data System (ADS)
Tekin, Tolga; Töpper, Michael; Reichl, Herbert
2009-05-01
Technological frontiers between semiconductor technology, packaging, and system design are disappearing. Scaling down geometries [1] alone no longer provides improved performance, lower power, smaller size, and lower cost. It will require "More than Moore" [2] through the tighter integration of system level components at the package level. System-in-Package (SiP) will deliver the efficient use of three dimensions (3D) through innovation in packaging and interconnect technology. A key bottleneck to the implementation of high-performance microelectronic systems, including SiP, is the lack of low-latency, high-bandwidth, and high-density off-chip interconnects. Some of the challenges in achieving high-bandwidth chip-to-chip communication using electrical interconnects include the high losses in the substrate dielectric, reflections and impedance discontinuities, and susceptibility to crosstalk [3]. Obviously, the incentive for the use of photonics to overcome these challenges and leverage low-latency and high-bandwidth communication will enable the vision of optical computing within next generation architectures. Supercomputers of today offer sustained performance of more than petaflops, which can be increased by utilizing optical interconnects. Next generation computing architectures are needed with ultra-low power consumption and ultra-high performance, enabled by novel interconnection technologies. In this paper we will discuss a CMOS compatible underlying technology to enable next generation optical computing architectures. By introducing a new optical layer within the 3D SiP, the development of converged microsystems and their deployment in next generation optical computing architectures will be leveraged.
Authentication and Authorization of End User in Microservice Architecture
NASA Astrophysics Data System (ADS)
He, Xiuyu; Yang, Xudong
2017-10-01
As the market and business continue to expand, the traditional single monolithic architecture is facing more and more challenges. The development of cloud computing and container technology has made the microservice architecture more popular. While the low coupling, fine granularity, scalability, flexibility and independence of the microservice architecture bring convenience, the inherent complexity of the distributed system makes the security of the microservice architecture both important and difficult. This paper aims to study the authentication and authorization of the end user under the microservice architecture. By comparing with traditional measures and researching existing technology, this paper puts forward a set of authentication and authorization strategies suitable for the microservice architecture, such as distributed sessions, SSO solutions, client-side JSON web tokens and JWT + API Gateway, and summarizes the advantages and disadvantages of each method.
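A minimal sketch of the client-side JSON web token strategy mentioned above: the gateway issues an HMAC-signed token carrying the end user's identity, and every microservice can verify it statelessly instead of consulting a shared session store. This uses only the Python standard library; the claim names, secret handling, and HS256-style encoding are simplified assumptions, not a production design or the paper's implementation.

```python
import base64, hashlib, hmac, json, time

SECRET = b"shared-signing-key"  # assumed to be distributed to gateway/services out of band

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user_id: str, roles, ttl_seconds: int = 3600) -> str:
    """Build a compact header.payload.signature token for the end user."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(
        {"sub": user_id, "roles": roles, "exp": int(time.time()) + ttl_seconds}).encode())
    sig = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str) -> dict:
    """Check signature and expiry; any service holding SECRET can do this statelessly."""
    header, payload, sig = token.split(".")
    expected = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = issue_token("alice", ["reader"])
print(verify_token(token)["sub"])
```

The trade-off relative to distributed sessions is the usual one: verification needs no shared state, but issued tokens cannot easily be revoked before they expire.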
Increasing the Cryogenic Toughness of Steels
NASA Technical Reports Server (NTRS)
Rush, H. F.
1986-01-01
Grain-refining heat treatments increase toughness without substantial strength loss. Five alloys were selected for study, all at or near the technological limit. Results showed clearly that the grain sizes of these alloys were refined by such heat treatments and that grain refinement results in a large improvement in toughness without substantial loss in strength. The best improvements were seen in HP-9-4-20 Steel, at the low-strength end of the technological limit, and in Maraging 200, at the high-strength end. These alloys, in the grain-refined condition, are considered for model applications in high-Reynolds-number cryogenic wind tunnels.
High performance computing and communications program
NASA Technical Reports Server (NTRS)
Holcomb, Lee
1992-01-01
A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.
Future of Assurance: Ensuring that a System is Trustworthy
NASA Astrophysics Data System (ADS)
Sadeghi, Ahmad-Reza; Verbauwhede, Ingrid; Vishik, Claire
Significant efforts are put into defining and implementing strong security measures for all components of the computing environment. It is equally important to be able to evaluate the strength and robustness of these measures and establish trust among the components of the computing environment based on parameters and attributes of these elements and best practices associated with their production and deployment. Today the inventory of techniques used for security assurance and to establish trust -- audit, security-conscious development process, cryptographic components, external evaluation -- is somewhat limited. These methods have their indisputable strengths and have contributed significantly to the advancement in the area of security assurance. However, shorter product and technology development cycles and the sheer complexity of modern digital systems and processes have begun to decrease the efficiency of these techniques. Moreover, these approaches and technologies address only some aspects of security assurance and, for the most part, evaluate assurance in a general design rather than an instance of a product. Additionally, various components of the computing environment participating in the same processes enjoy different levels of security assurance, making it difficult to ensure adequate levels of protection end-to-end. Finally, most evaluation methodologies rely on the knowledge and skill of the evaluators, making reliable assessments of trustworthiness of a system even harder to achieve. The paper outlines some issues in security assurance that apply across the board, with a focus on the trustworthiness and authenticity of hardware components, and evaluates current approaches to assurance.
Chu, Adeline; Mastel-Smith, Beth
2010-01-01
Technology has a great impact on nursing practice. With the increasing numbers of older Americans using computers and the Internet in recent years, nurses have the capability to deliver effective and efficient health education to their patients and the community. Based on the theoretical framework of Bandura's self-efficacy theory, the pilot project reported findings from a 5-week computer course on Internet health searches in older adults, 65 years or older, at a senior activity learning center. Twelve participants were recruited and randomized to either the intervention or the control group. Measures of computer anxiety, computer confidence, and computer self-efficacy scores were analyzed at baseline, at the end of the program, and 6 weeks after the completion of the program. Analysis was conducted with repeated-measures analysis of variance. Findings showed participants who attended a structured computer course on Internet health information retrieval reported lowered anxiety and increased confidence and self-efficacy at the end of the 5-week program and 6 weeks after the completion of the program as compared with participants who were not in the program. The study demonstrated that a computer course can help reduce anxiety and increase confidence and self-efficacy in online health searches in older adults.
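A minimal sketch of the kind of repeated-measures analysis reported above, using statsmodels' AnovaRM on long-format data. The column names and synthetic scores are assumptions for illustration only, and the sketch models just the within-subject time factor (the study also compares intervention and control groups, which requires a mixed design).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: 12 participants, self-efficacy at 3 time points.
rng = np.random.default_rng(42)
subjects = np.repeat(np.arange(12), 3)
time = np.tile(["baseline", "post", "followup"], 12)
score = rng.normal(loc=[3.0, 3.8, 3.7] * 12, scale=0.4)
df = pd.DataFrame({"subject": subjects, "time": time, "score": score})

# Within-subject (repeated-measures) ANOVA on the time factor.
print(AnovaRM(df, depvar="score", subject="subject", within=["time"]).fit())
```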
Internal fluid mechanics research on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.
1988-01-01
The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.
High Technology in the Vocational Areas.
ERIC Educational Resources Information Center
Ogletree, Earl; Etlinger, Leonard
1984-01-01
Presents a broad overview of the technological revolution and its impact on American business and industry, focusing on electronics and computers. Argues that high technology vocational programs must become a part of high school and college curricula. (KH)
Advances and trends in computational structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1986-01-01
Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.
High performance computing and communications: Advancing the frontiers of information technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.
ERIC Educational Resources Information Center
Tweel, Abdeneaser
2012-01-01
High uncertainties related to cloud computing adoption may hinder IT managers from making solid decisions about adopting cloud computing. The problem addressed in this study was the lack of understanding of the relationship between factors related to the adoption of cloud computing and IT managers' interest in adopting this technology. In…
High-Performance Computing Data Center Warm-Water Liquid Cooling
Computational Science | NREL
NREL's High-Performance Computing Data Center (HPC Data Center) is liquid cooled. Warm-water liquid cooling technologies offer a more energy-efficient solution that also allows for effective…
Intelligent Control in Automation Based on Wireless Traffic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
2007-09-01
Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical in both maintaining the integrity of computer systems and increasing the efficient use of this technology in control type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, a significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with the comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control type applications, as well as making the use of it more secure.
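To make the fuzzy side of such a traffic analyzer concrete, here is a tiny sketch: triangular membership functions over simple traffic features are combined into a suspicion score. The features, membership breakpoints, and rule are illustrative assumptions and not the algorithm from the paper, which also includes a neural (learning) component.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def suspicion_score(packets_per_s, mean_packet_size):
    """Two-input rule base: high packet rate AND small packets -> suspicious."""
    rate_high = tri(packets_per_s, 50, 200, 400)
    rate_normal = tri(packets_per_s, 0, 20, 80)
    size_small = tri(mean_packet_size, 0, 60, 200)
    suspicious = min(rate_high, size_small)   # Mamdani-style min for AND
    benign = rate_normal
    return suspicious / (suspicious + benign + 1e-9)

print(round(suspicion_score(220, 48), 2))   # bursty small-packet traffic -> near 1
print(round(suspicion_score(15, 400), 2))   # low-rate bulk transfer -> near 0
```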
NASA Astrophysics Data System (ADS)
Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.
2013-10-01
In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture is apt to provide a centralized solution to end users, while all the required resources are often offered by large enterprises or special agencies. Thus, it is a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies -- a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services -- are discussed in detail, and related experiments are conducted for further verification.
Lin, Chin-Teng; Ko, Li-Wei; Chang, Meng-Hsiu; Duann, Jeng-Ren; Chen, Jing-Ying; Su, Tung-Ping; Jung, Tzyy-Ping
2010-01-01
Biomedical signal monitoring systems have rapidly advanced in recent years, propelled by significant advances in electronic and information technologies. Brain-computer interface (BCI) is one of the important research branches and has become a hot topic in the study of neural engineering, rehabilitation, and brain science. Traditionally, most BCI systems use bulky, wired, laboratory-oriented sensing equipment to measure brain activity under well-controlled conditions within a confined space. Using bulky sensing equipment is not only uncomfortable and inconvenient for users, but also impedes their ability to perform routine tasks in daily operational environments. Furthermore, owing to large data volumes, signal processing of BCI systems is often performed off-line using high-end personal computers, hindering the applications of BCI in real-world environments. To be practical for routine use by unconstrained, freely-moving users, BCI systems must be noninvasive, nonintrusive, lightweight and capable of online signal processing. This work reviews recent online BCI systems, focusing especially on wearable, wireless and real-time systems. Copyright 2009 S. Karger AG, Basel.
Continuous robust sound event classification using time-frequency features and deep learning
Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
Continuous robust sound event classification using time-frequency features and deep learning.
McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy
2017-01-01
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
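The energy-based event detection front end mentioned in the records above can be illustrated with a short sketch: frame the signal, compute per-frame energy, and flag runs of frames above an adaptive threshold as candidate events to pass to the classifier. The frame length, hop size, and threshold factor are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def energy_segments(signal, sr, frame_ms=25, hop_ms=10, factor=3.0):
    """Return (start, end) sample indices of regions whose frame energy
    exceeds `factor` times the median frame energy (a crude noise floor)."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame) // hop)
    energy = np.array([np.sum(signal[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    active = energy > factor * np.median(energy)
    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i * hop                            # event onset
        elif not is_active and start is not None:
            segments.append((start, i * hop + frame))  # event offset
            start = None
    if start is not None:
        segments.append((start, len(signal)))
    return segments

# Synthetic example: 1 s of low-level noise with a louder tone burst in the middle.
sr = 16000
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(sr)
x[6000:9000] += 0.2 * np.sin(2 * np.pi * 440 * np.arange(3000) / sr)
print(energy_segments(x, sr))
```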
Space Communication Artificial Intelligence for Link Evaluation Terminal (SCAILET)
NASA Technical Reports Server (NTRS)
Shahidi, Anoosh K.; Schlegelmilch, Richard F.; Petrik, Edward J.; Walters, Jerry L.
1992-01-01
A software application to assist end-users of the high burst rate (HBR) link evaluation terminal (LET) for satellite communications is being developed. The HBR LET system developed at NASA Lewis Research Center is an element of the Advanced Communications Technology Satellite (ACTS) Project. The HBR LET is divided into seven major subsystems, each with its own expert. Programming scripts, test procedures defined by design engineers, set up the HBR LET system. These programming scripts are cryptic, hard to maintain, and require a steep learning curve. These scripts were developed by the system engineers, who will not be available for the end-users of the system. To increase end-user productivity, a friendly interface needs to be added to the system. One possible solution is to provide the user with adequate documentation to perform the needed tasks. With the complexity of this system, the vast amount of documentation needed would be overwhelming and the information would be hard to retrieve. With limited resources, maintenance is another reason for not using this form of documentation. An advanced form of interaction is being explored using current computer techniques. This application, which incorporates a combination of multimedia and artificial intelligence (AI) techniques to provide end-users with an intelligent interface to the HBR LET system, comprises an intelligent assistant, intelligent tutoring, and hypermedia documentation. The intelligent assistant and tutoring systems address the critical programming needs of the end-user.
Development of Igbo Language E-Learning System
ERIC Educational Resources Information Center
Oyelami, Olufemi Moses
2008-01-01
E-Learning involves using a variety of computer and networking technologies to access training materials. The United Nations report, quoted in one of the Nigerian dailies towards the end of year 2006, says that most of the minor languages in the world would be extinct by the year 2050. African languages are currently suffering from discard by…
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Housner, Jerrold M.
1993-01-01
Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.
Fager, Susan Koch; Burnfield, Judith M
2014-03-01
To understand individuals' perceptions of technology use during inpatient rehabilitation. A qualitative phenomenological study using semi-structured interviews of 10 individuals with diverse underlying diagnoses and/or a close family member who participated in inpatient rehabilitation. Core themes focused on assistive technology usage (equipment set-up, reliability and fragility of equipment, expertise required to use assistive technology and use of mainstream technologies) and opportunities for using technology to increase therapeutic engagement (opportunities for practice outside of therapy, goals for therapeutic exercises and technology for therapeutic exercises: motivation and social interaction). Interviews revealed the need for durable, reliable and intuitive technology without requiring a high level of expertise to install and implement. A strong desire for the continued use of mainstream devices (e.g. cell phones, tablet computers) reinforces the need for a wider range of access options for those with limited physical function. Finally, opportunities to engage in therapeutically meaningful activities beyond the traditional treatment hours were identified as valuable for patients to not only improve function but to also promote social interaction. Assistive technology increases functional independence of severely disabled individuals. End-users (patients and families) identified a need for designs that are durable, reliable, intuitive, easy to consistently install and use. Technology use (adaptive or commercially available) provides a mechanism to extend therapeutic practice beyond the traditional therapy day. Adapting skeletal tracking technology used in gaming software could automate exercise tracking, documentation and feedback for patient motivation and clinical treatment planning and interventions.
Toward visual user interfaces supporting collaborative multimedia content management
NASA Astrophysics Data System (ADS)
Husein, Fathi; Leissler, Martin; Hemmje, Matthias
2000-12-01
Supporting collaborative multimedia content management activities, e.g., image and video acquisition, exploration, and access dialogues between naive users and multimedia information systems, is a non-trivial task. Although a wide variety of experimental and prototypical multimedia storage technologies as well as corresponding indexing and retrieval engines are available, most of them lack appropriate support for collaborative, end-user-oriented user interface front ends. The development of advanced user adaptable interfaces is necessary for building collaborative multimedia information-space presentations based upon advanced tools for information browsing, searching, filtering, and brokering to be applied on potentially very large and highly dynamic multimedia collections with a large number of users and user groups. Therefore, the development of advanced and at the same time adaptable and collaborative computer graphical information presentation schemes that make it easy to apply adequate visual metaphors for defined target user stereotypes has to become a key focus within ongoing research activities trying to support collaborative information work with multimedia collections.
Progress on applications of high temperature superconducting microwave filters
NASA Astrophysics Data System (ADS)
Chunguang, Li; Xu, Wang; Jia, Wang; Liang, Sun; Yusheng, He
2017-07-01
In the past two decades, various kinds of high performance high temperature superconducting (HTS) filters have been constructed and the HTS filters and their front-end subsystems have been successfully applied in many fields. The HTS filters with small insertion loss, narrow bandwidth, flat in-band group delay, deep out-of-band rejection, and steep skirt slope are reviewed. Novel HTS filter design technologies, including those in high power handling filters, multiband filters and frequency tunable filters, are reviewed, as well as the all-HTS integrated front-end receivers. The successful applications to various civilian fields, such as mobile communication, radar, deep space detection, and satellite technology, are also reviewed.
CAD/CAM. High-Technology Training Module.
ERIC Educational Resources Information Center
Zuleger, Robert
This high technology training module is an advanced course on computer-assisted design/computer-assisted manufacturing (CAD/CAM) for grades 11 and 12. This unit, to be used with students in advanced drafting courses, introduces the concept of CAD/CAM. The content outline includes the following seven sections: (1) CAD/CAM software; (2) computer…
The use of computer simulations in whole-class versus small-group settings
NASA Astrophysics Data System (ADS)
Smetana, Lara Kathleen
This study explored the use of computer simulations in a whole-class as compared to small-group setting. Specific consideration was given to the nature and impact of classroom conversations and interactions when computer simulations were incorporated into a high school chemistry course. This investigation fills a need for qualitative research that focuses on the social dimensions of actual classrooms. Participants included a novice chemistry teacher experienced in the use of educational technologies and two honors chemistry classes. The study was conducted in a rural school in the south-Atlantic United States at the end of the fall 2007 semester. The study took place during one instructional unit on atomic structure. Data collection allowed for triangulation of evidence from a variety of sources: approximately 24 hours of video- and audio-taped classroom observations, supplemented with the researcher's field notes and analytic journal; miscellaneous classroom artifacts such as class notes, worksheets, and assignments; open-ended pre- and post-assessments; student exit interviews; and teacher entrance, exit and informal interviews. Four web-based simulations were used, three of which were from the ExploreLearning collection. Assessments were analyzed using descriptive statistics, and classroom observations, artifacts and interviews were analyzed using Erickson's (1986) guidelines for analytic induction. Conversational analysis was guided by methods outlined by Erickson (1982). Findings indicated (a) the teacher effectively incorporated simulations in both settings, (b) students in both groups significantly improved their understanding of the chemistry concepts, (c) there was no statistically significant difference between groups' achievement, (d) there was more frequent exploratory talk in the whole-class group, (e) there were more frequent and meaningful teacher-student interactions in the whole-class group, (f) additional learning experiences not measured on the assessment resulted from conversations and interactions in the whole-class setting, and (g) the potential benefits of exploratory talk in the whole-class setting were not fully realized. These findings suggest that both whole-class and small-group settings are appropriate for using computer simulations in science. The effective incorporation of simulations into whole-class instruction may provide a solution to the dilemma of technology penetration versus integration in today's classrooms.
Ehrler, Frederic; Ducloux, Pascal; Wu, Danny T Y; Lovis, Christian; Blondon, Katherine
2018-01-01
Supporting caregivers' workflow with mobile applications (apps) is a growing trend. At the bedside, apps can provide new ways to support the documentation process rather than using a desktop computer in a nursing office. Although these applications show potential, few existing reports have studied the real impact of such solutions. At the University Hospitals of Geneva, we developed BEDside Mobility, a mobile application supporting nurses' daily workflow. In a pilot study, the app was trialed in two wards for a period of one month. We collected data on actual usage of the app and asked the users to complete a tailored technology acceptance model questionnaire at the end of the study period. Results show that participation remained stable over time, with participants using the tool for almost 29 minutes per day on average. The technology acceptance questionnaires revealed high usability of the app and good promotion from the institution, although users did not perceive any increase in productivity. Overall, intent of use diverged between promoters and antagonists. Furthermore, some participants considered the tool an addition to their workload. This evaluation underlines the importance of helping all end users perceive the benefits of a new intervention, since coworkers strongly influence each other.
Space Communications Artificial Intelligence for Link Evaluation Terminal (SCAILET)
NASA Technical Reports Server (NTRS)
Shahidi, Anoosh
1991-01-01
A software application to assist end-users of the Link Evaluation Terminal (LET) for satellite communication is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving, 220/110 Mbps capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET and ACTS are being developed at the NASA Lewis Research Center. The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit-pattern as a modulated signal to the satellite. By comparing the transmitted bit pattern with the received bit pattern, HBR LET can determine the bit error rate (BER) under various atmospheric conditions. An algorithm for power augmentation is applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions. Programming scripts, defined by the design engineer, set up the HBR LET terminal by programming subsystem devices through IEEE-488 interfaces. However, the scripts are difficult to use, require a steep learning curve, are cryptic, and are hard to maintain. The combination of the learning curve and the complexities involved with editing the script files may discourage end-users from utilizing the full capabilities of the HBR LET system. An intelligent assistant component of SCAILET that addresses critical end-user needs in the programming of the HBR LET system as anticipated by its developers is described. A close look is taken at the various steps involved in writing ECM software for a C&P computer and at how the intelligent assistant improves the HBR LET system and enhances the end-user's ability to perform the experiments.
White paper on science operations
NASA Technical Reports Server (NTRS)
Schreier, Ethan J.
1991-01-01
Major changes are taking place in the way astronomy gets done. There are continuing advances in observational capabilities across the frequency spectrum, involving both ground-based and space-based facilities. There is also very rapid evolution of relevant computing and data management technologies. However, although the new technologies are filtering in to the astronomy community, and astronomers are looking at their computing needs in new ways, there is little coordination or coherent policy. Furthermore, although there is great awareness of the evolving technologies in the arena of operations, much of the existing operations infrastructure is ill-suited to take advantage of them. Astronomy, especially space astronomy, has often been at the cutting edge of computer use in data reduction and image analysis, but has been somewhat removed from advanced applications in operations, which have tended to be implemented by industry rather than by the end-user scientists. The purpose of this paper is threefold. First, we briefly review the background and general status of astronomy-related computing. Second, we make recommendations in three areas: data analysis; operations (directed primarily to NASA-related activities); and issues of management and policy, believing that these must be addressed to enable technological progress and to proceed through the next decade. Finally, we recommend specific NASA-related work as part of the Astrotech-21 plans, to enable better science operations in the operations of the Great Observatories and in the lunar outpost era.
Exploring Computer Technology. The Illinois Plan for Industrial Education.
ERIC Educational Resources Information Center
Illinois State Univ., Normal.
This guide, which is one in the "Exploration" series of curriculum guides intended to assist junior high and middle school industrial educators in helping their students explore diverse industrial situations and technologies used in industry, deals with exploring computer technology. The following topics are covered in the individual…
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
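To make the decomposition idea above concrete, here is a minimal Python/mpi4py sketch, assuming a one-dimensional slab decomposition with periodic neighbours, an invented interaction cutoff, and placeholder cell updates; it illustrates the general pattern of owning a sub-domain and exchanging boundary ('halo') cells, not the authors' actual algorithm or its implementation.

# Minimal sketch of slab-based domain decomposition for an off-lattice
# cell population (illustrative assumptions only; not the paper's code).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

domain_width = 100.0            # assumed global domain size in x
slab = domain_width / size      # each rank owns one x-slab
cutoff = 1.5                    # assumed short-range interaction radius

# Each rank generates the cells that fall inside its own slab (x, y).
rng = np.random.default_rng(rank)
local_cells = rng.uniform([rank * slab, 0.0], [(rank + 1) * slab, 10.0], (200, 2))

def halo_exchange(cells):
    # Send cells near the slab boundaries to the neighbouring ranks
    # (periodic boundaries assumed for simplicity).
    left = cells[cells[:, 0] < rank * slab + cutoff]
    right = cells[cells[:, 0] > (rank + 1) * slab - cutoff]
    from_right = comm.sendrecv(left, dest=(rank - 1) % size, source=(rank + 1) % size)
    from_left = comm.sendrecv(right, dest=(rank + 1) % size, source=(rank - 1) % size)
    return np.vstack([cells, from_left, from_right])

for step in range(10):
    cells_with_halo = halo_exchange(local_cells)
    # Placeholder update: a real model would compute short-range forces on
    # local cells from cells_with_halo and integrate the equations of motion.
    local_cells += rng.normal(0.0, 0.01, local_cells.shape)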
Computational approaches in the design of synthetic receptors - A review.
Cowen, Todd; Karim, Kal; Piletsky, Sergey
2016-09-14
The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies" - high affinity robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of monomer(s)-template ratio and selectivity analysis. In this review the various computational methods are discussed with reference to all the relevant literature published since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which it was used. Detailed analysis is given to novel techniques including analysis of polymer binding sites, the use of novel screening programs and simulations of the MIP polymerisation reaction. Further advances in molecular modelling and computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology. Copyright © 2016 Elsevier B.V. All rights reserved.
The emerging role of cloud computing in molecular modelling.
Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W
2013-07-01
There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.
1986-01-01
Efforts to demonstrate that the dendritic web technology is ready for commercial use by the end of 1986 continue. A commercial readiness goal involves improvements to crystal growth furnace throughput to demonstrate an area growth rate of greater than 15 sq cm/min while simultaneously growing 10 meters or more of ribbon under conditions of continuous melt replenishment. Continuous means that the silicon melt is being replenished at the same rate that it is being consumed by ribbon growth, so that the melt level remains constant. Efforts continue on the computer thermal modeling required to define high speed, low stress, continuous growth configurations; on the study of convective effects in the molten silicon and growth furnace cover gas; on furnace component modifications; on web quality assessments; and on experimental growth activities.
Digital Linear Tape (DLT) technology and product family overview
NASA Technical Reports Server (NTRS)
Lignos, Demetrios
1994-01-01
The demand that began a couple of years ago for increased data storage capacity continues. Peripheral Strategies (a Santa Barbara, California, storage market research firm) projects the amount of data stored on the average enterprise network will grow by 50 percent to 100 percent per year. Furthermore, Peripheral Strategies says that a typical mid-range workstation system containing 30GB to 50GB of storage today will grow at the rate of 50 percent per year. Dan Friedlander, a Boulder, Colorado-based consultant specializing in PC-LAN backup, says, 'The average NetWare LAN is about 8GB, but there are many that have 30GB to 300GB.....' The substantial growth of storage requirements has given rise to various tape technologies that seek to satisfy the needs of today's and, especially, the next generation's systems and applications. There are five leading tape technologies in the market today: QIC (Quarter Inch Cartridge), IBM 3480/90, 8mm, DAT (Digital Audio Tape) and DLT (Digital Linear Tape). Product performance specifications and user needs have combined to classify these technologies into low-end, mid-range, and high-end systems applications. Although the manufacturers may try to position their products differently, product specifications and market requirements have determined that QIC and DAT are primarily low-end systems products while 8mm and DLT are competing for mid-range systems applications and the high-end systems space, where IBM compatibility is not required. The 3480/90 products seem to be used primarily in the IBM market, for interchangeability purposes. There are advantages and disadvantages for each of the tape technologies in the market today. We believe that DLT technology offers a significant number of very important features and specifications that make it extremely attractive for most current as well as emerging new applications, such as Hierarchical Storage Management (HSM). This paper will demonstrate why we think that the DLT technology and family of DLT products will become the technology of choice for most new applications in the mid-range and high-end (non-IBM) markets.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, hardware-based solutions, and in particular FPGAs, appear as a candidate technology: although power use is higher than that of lower-power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
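The energy argument above (higher power draw but much shorter execution time can still mean lower energy per task) is easy to illustrate with a back-of-the-envelope calculation; the figures in this small Python sketch are invented for illustration and are not measurements from the paper.

# Hypothetical energy comparison: energy = power x execution time.
mcu_power_w, mcu_time_s = 0.05, 2.0      # assumed low-power microcontroller
fpga_power_w, fpga_time_s = 0.50, 0.05   # assumed FPGA accelerator

mcu_energy_j = mcu_power_w * mcu_time_s      # 0.100 J per task
fpga_energy_j = fpga_power_w * fpga_time_s   # 0.025 J per task

print(f"MCU: {mcu_energy_j:.3f} J, FPGA: {fpga_energy_j:.3f} J")
# Despite a 10x higher power draw, the FPGA finishes 40x faster here,
# so it uses a quarter of the energy for this (hypothetical) task.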
Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
2014-01-01
Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.
Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo
2016-01-01
The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART, which requires reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER'S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
NASA Astrophysics Data System (ADS)
Waight, Noemi; Gillmeister, Kristina
2014-04-01
This study examined teachers' and students' initial conceptions of computer-based models—Flash and NetLogo models—and documented how teachers and students reconciled notions of multiple representations featuring macroscopic, submicroscopic and symbolic representations prior to actual intervention in eight high school chemistry classrooms. Individual in-depth interviews were conducted with 32 students and 6 teachers. Findings revealed an interplay of complex factors that functioned as opportunities and obstacles in the implementation of technologies in science classrooms. Students revealed preferences for the Flash models as opposed to the open-ended NetLogo models. Altogether, due to lack of content and modeling background knowledge, students experienced difficulties articulating coherent and blended understandings of multiple representations. Concurrently, while the aesthetic and interactive features of the models were of great value, they did not sustain students' initial curiosity and opportunities to improve understandings about chemistry phenomena. Most teachers recognized direct alignment of the Flash model with their existing curriculum; however, the benefits were relegated to existing procedural and passive classroom practices. The findings have implications for pedagogical approaches that address the implementation of computer-based models, function of models, models as multiple representations and the role of background knowledge and cognitive load, and the role of teacher vision and classroom practices.
Precision machining of optical surfaces with subaperture correction technologies MRF and IBF
NASA Astrophysics Data System (ADS)
Schmelzer, Olaf; Feldkamp, Roman
2015-10-01
Precision optical elements are used in a wide range of technical instrumentation. Many optical systems, e.g. semiconductor inspection modules, laser heads for laser material processing, or high-end movie cameras, contain precision optics, including aspherical or freeform surfaces. Critical parameters for such systems are wavefront error, image field curvature and scattered light. Following these demands, the lens parameters are also critical with respect to power, the RMSi of the surface form error and micro-roughness. How can these requirements be met? The emphasis of this discussion is on the application of subaperture correction technologies in the fabrication of high-end aspheres and free-forms. The presentation focuses on the technology chain necessary for the production of high-precision aspherical optical components and on the characterization of the applied subaperture finishing tools MRF (magneto-rheological finishing) and IBF (ion beam figuring). These technologies open up the possibility of improving the performance of optical systems.
A Virtual Learning Application of the Schoolwide Enrichment Model and High-End Learning Theory
ERIC Educational Resources Information Center
Renzulli, Joseph S.; Reis, Sally M.
2012-01-01
Remarkable advances in instructional communication technology (ICT) have now made it possible to provide high levels of enrichment services to students online. This paper describes an Internet-based enrichment program based on a high-end learning theory that focuses on the development of creative productivity through the "application" of knowledge…
An Educator's Guide to High-End Videoconferencing.
ERIC Educational Resources Information Center
Maring, Gerald H.; Schmid, Jason A.; Roark, Jeremy
This document describes the origins of cybermentoring and focuses on projects with elementary and secondary schools throughout the state of Washington. It discusses use of telephone communication, email, web design, and low-end videoconferencing technologies in initial cyberprojects, and recent cyberprojects that have begun to make use of high-end…
Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
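A hedged sketch of the style of model described above: predicted run time is compute time plus per-message latency plus bytes divided by bandwidth. The message counts, message sizes and hardware parameters below are placeholders, not the paper's closed-form expressions or measured values.

# Toy latency/bandwidth performance model (illustrative parameters only).
def predict_runtime(flops, flops_per_sec, n_msgs, total_bytes,
                    latency_s, bandwidth_bytes_per_s):
    compute = flops / flops_per_sec
    communicate = n_msgs * latency_s + total_bytes / bandwidth_bytes_per_s
    return compute + communicate

# Hypothetical per-process numbers for one benchmark iteration.
t = predict_runtime(flops=2e9, flops_per_sec=1.5e8,    # assumed CPU rate
                    n_msgs=1200, total_bytes=6e7,
                    latency_s=150e-6,                   # assumed per-message latency
                    bandwidth_bytes_per_s=1.0e7)        # assumed ~10 MB/s usable
print(f"Predicted time per iteration: {t:.2f} s")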
Monitoring Precipitation from Space: targeting Hydrology Community?
NASA Astrophysics Data System (ADS)
Hong, Y.; Turk, J.
2005-12-01
During the past decades, advances in space, sensor and computer technology have made it possible to estimate precipitation nearly globally from a variety of observations in a relatively direct manner. The success of the Tropical Rainfall Measuring Mission (TRMM) has been a significant advance, allowing modern precipitation estimation algorithms to move toward daily quarter-degree measurements, while the need for precipitation data at temporal-spatial resolutions compatible with hydrologic modeling has been emphasized by the end user: the hydrology community. Can the future deployment of the Global Precipitation Measurement constellation of low-altitude orbiting satellites (covering 90% of the globe with a sampling interval of less than 3 hours), in conjunction with the existing suite of geostationary satellites, result in significant improvements in the scale and accuracy of precipitation estimates suitable for hydrology applications? This presentation will review the current state of satellite-derived precipitation estimation and demonstrate the early results and primary barriers to full global high-resolution precipitation coverage. An attempt to facilitate communication between data producers and users will be discussed through the development of an 'end-to-end' uncertainty propagation analysis framework to quantify both the precipitation estimation error structure and the error influence on hydrological modeling.
ERIC Educational Resources Information Center
Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.
This report presents a review of the High Performance Computing and Communications (HPCC) Program, which has as its goal the acceleration of the commercial availability and utilization of the next generation of high performance computers and networks in order to: (1) extend U.S. technological leadership in high performance computing and computer…
ERIC Educational Resources Information Center
Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James
2010-01-01
There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…
ERIC Educational Resources Information Center
Voithofer, R. J.
Television programs are increasingly featuring information technologies like computers as significant narrative devices, including the use of computer-based technologies as virtual worlds or environments in which characters interact, the use of computers as tools in problem solving and confronting conflict, and characters that are part human, part…
The Impact of Software on Associate Degree Programs in Electronic Engineering Technology.
ERIC Educational Resources Information Center
Hata, David M.
1986-01-01
Assesses the range and extent of computer assisted instruction software available in electronic engineering technology education. Examines the need for software skills in four areas: (1) high-level languages; (2) assembly language; (3) computer-aided engineering; and (4) computer-aided instruction. Outlines strategies for the future in three…
Trusted Computing Technologies, Intel Trusted Execution Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guise, Max Joseph; Wendt, Jeremy Daniel
2011-01-01
We describe the current state-of-the-art in Trusted Computing Technologies - focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release and unauthorized alteration: Unauthorized users should not access the sensitive input and sensitive output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that the unauthorized should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.
ERIC Educational Resources Information Center
Eichleay, Kristen; Pressman, Harvey
1987-01-01
Exemplary projects which help disabled people use technology (particularly computers) to expand their employment opportunities include: Project Entry (Seattle); Georgia Computer Programmer Project (Atlanta); Perkins Project with Industry (Watertown, Massachusetts); Project Byte (Newton, Massachusetts); Technology Relevant to You (St. Louis); Special…
Teachers' Computer Self-Efficacy and Their Use of Educational Technology
ERIC Educational Resources Information Center
Turel, Vehbi
2014-01-01
This study examined the use of educational technology by primary and subject teachers (i.e. secondary and high school teachers) in a small town in the eastern part of Turkey in the spring of 2012. The study examined the primary, secondary and high school teachers': (1) personal and computer related (demographic) characteristics; (2) their computer…
ERIC Educational Resources Information Center
Papastergiou, M.
2008-01-01
This study investigated Greek high school students' intentions and motivation towards and against pursuing academic studies in Computer Science (CS), the influence of the family and the scholastic environment on students' career choices, students' perceptions of CS and the Information Technology (IT) profession as well as students' attendance at…
Network and data security design for telemedicine applications.
Makris, L; Argiriou, N; Strintzis, M G
1997-01-01
The maturing of telecommunication technologies has ushered in a whole new era of applications and services in the health care environment. Teleworking, teleconsultation, multimedia conferencing and medical data distribution are rapidly becoming commonplace in clinical practice. As a result, a set of problems arises concerning data confidentiality and integrity. Public computer networks, such as the emerging ISDN technology, are vulnerable to eavesdropping. Therefore it is important for telemedicine applications to employ end-to-end encryption mechanisms securing the data channel from unauthorized access or modification. We propose a network access and encryption system that is both economical and easily implemented for integration in developing or existing applications, using well-known and thoroughly tested encryption algorithms. Public-key cryptography is used for session-key exchange, while symmetric algorithms are used for bulk encryption. Mechanisms for session-key generation and exchange are also provided.
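As a rough illustration of the hybrid scheme described above (public-key exchange of a session key, symmetric bulk encryption), the Python sketch below uses the cryptography package with RSA-OAEP and Fernet; it is a minimal sketch under those assumptions, not the system proposed in the paper, and it omits certificates, signatures and key distribution.

# Hedged sketch: RSA wraps a session key; Fernet (AES-based) encrypts the bulk data.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Receiver's long-term key pair (key distribution not shown).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a fresh session key, wrap it with the receiver's public
# key, then bulk-encrypt the (placeholder) medical record with the session key.
session_key = Fernet.generate_key()
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
ciphertext = Fernet(session_key).encrypt(b"patient record ...")

# Receiver: unwrap the session key and decrypt the bulk data.
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
assert Fernet(recovered_key).decrypt(ciphertext) == b"patient record ..."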
ERIC Educational Resources Information Center
Henry, Michele
2015-01-01
This study investigated choral singers' comfort level using computer technology for vocal sight-reading assessment. High school choral singers (N = 138) attending a summer music camp completed a computer-based sight-reading assessment and accompanying pre- and posttest surveys on their musical backgrounds and perceptions about technology. A large…
Working in a Text Mine; Is Access about to Go down?
ERIC Educational Resources Information Center
Emery, Jill
2008-01-01
The age of networked research and networked data analysis is upon us. "Wired Magazine" proclaims on the cover of their July 2008 issue: "The End of Science. The quest for knowledge used to begin with grand theories. Now it begins with massive amounts of data. Welcome to the Petabyte Age." Computing technology is sufficiently complex at this point…
2009-06-01
Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training Command. ...control of cross-domain dependencies, and management of Title 10 relationships. This literature review of joint doctrine indicates USSTRATCOM...
ERIC Educational Resources Information Center
Bukar, Ibrahim Bulama; Bello, Suleiman; Ibi, Mustapha Baba
2016-01-01
Information and Communication Technologies have come to transform and reshape school structures, curricula, pedagogies, assessment and evaluation. Despite these advantages, very few institutions of learning in Nigeria have been able to explore the inherent benefits of ICT to the fullest. The quest to attain Educational ends in response to the…
ITS, The End of the World as We Know It: Transitioning AIED into a Service-Oriented Ecosystem
ERIC Educational Resources Information Center
Nye, Benjamin D.
2016-01-01
Advanced learning technologies are reaching a new phase of their evolution where they are finally entering mainstream educational contexts, with persistent user bases. However, as AIED scales, it will need to follow recent trends in service-oriented and ubiquitous computing: breaking AIED platforms into distinct services that can be composed for…
A Tools-Based Approach to Teaching Data Mining Methods
ERIC Educational Resources Information Center
Jafar, Musa J.
2010-01-01
Data mining is an emerging field of study in Information Systems programs. Although the course content has been streamlined, the underlying technology is still in a state of flux. The purpose of this paper is to describe how we utilized Microsoft Excel's data mining add-ins as a front-end to Microsoft's Cloud Computing and SQL Server 2008 Business…
Energy Consumption Management of Virtual Cloud Computing Platform
NASA Astrophysics Data System (ADS)
Li, Lin
2017-11-01
For research on energy consumption management of virtual cloud computing platforms, the energy consumption of both virtual machines and the underlying cloud platform must be understood in greater depth; only then can the problems facing energy consumption management be solved. The key problem lies in data centers with high energy consumption, so new techniques are greatly needed. Virtualization technology and cloud computing have become powerful tools in everyday life, work and production because of their strengths and many advantages, and both are developing rapidly, offering very high resource utilization. The presence of virtualization and cloud computing technologies is therefore essential in the constantly developing information age. This paper summarizes, explains and further analyzes the energy consumption management issues of virtual cloud computing platforms, ultimately giving readers a clearer understanding of energy consumption management for such platforms and offering help in various aspects of daily life and work.
Guo, Qiaohong; Cann, Beverley; McClement, Susan; Thompson, Genevieve; Chochinov, Harvey Max
2017-05-06
Confinement to an in-patient hospital ward impairs patients' sense of social support and connectedness. Providing the means, through communication technology, for patients to maintain contact with friends and family can potentially improve well-being at the end of life by minimizing social isolation and facilitating social connection. This study aimed to explore the feasibility of introducing internet-based communication and information technologies for in-patients and their families and to describe their experience in using this technology. A cross-sectional survey design was used to describe patient and family member experiences in using internet-based communication technology and health care provider views of using such technology in palliative care. Participants included 13 palliative in-patients, 38 family members, and 14 health care providers. An iPad or a laptop computer with password-protected internet access was loaned to each patient and family member for about two weeks or they used their own electronic devices for the duration of the patient's stay. Quantitative and qualitative data were collected from patients, families, and health care providers to discern how patients and families used the technology, its ease of use and its impact. Descriptive statistics and paired sample t-tests were used to analyze quantitative data; qualitative data were analyzed using constant comparative techniques. Palliative patients and family members used the technology to keep in touch with family and friends, entertain themselves, look up information, or accomplish tasks. Most participants found the technology easy to use and reported that it helped them feel better overall, connected to others and calm. The availability of competent, respectful, and caring technical support personnel was highly valued by patients and families. Health care providers identified that computer technology helped patients and families keep others informed about the patient's condition, enabled sharing of important decisions and facilitated access to the outside world. This study confirmed the feasibility of offering internet-based communication and information technologies on palliative care in-patient units. Patients and families need to be provided appropriate technical support to ensure that the technology is used optimally to help them accomplish their goals.
Artificial Intelligence Applications to High-Technology Training.
ERIC Educational Resources Information Center
Dede, Christopher
1987-01-01
Discusses the use of artificial intelligence to improve occupational instruction in complex subjects with high performance goals, such as those required for high-technology jobs. Highlights include intelligent computer assisted instruction, examples in space technology training, intelligent simulation environments, and the need for adult training…
Occupational stress in human computer interaction.
Smith, M J; Conway, F T; Karsh, B T
1999-04-01
There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.
Retractable Pin Tools for the Friction Stir Welding Process
NASA Technical Reports Server (NTRS)
1998-01-01
Two companies have successfully commercialized a specialized welding tool developed at the Marshall Space Flight Center (MSFC). Friction stir welding uses the high rotational speed of a tool and the resulting frictional heat created from contact to crush, 'stir' together, and forge a bond between two metal alloys. It has had a major drawback: reliance on a single-piece pin tool. The pin is slowly plunged into the joint between the two materials to be welded and rotated at high speed. At the end of the weld, the single-piece pin tool is retracted and leaves a 'keyhole,' something which is unacceptable when welding cylindrical objects such as drums, pipes and storage tanks. Another drawback is the requirement for different-length pin tools when welding materials of varying thickness. An engineer at the MSFC helped design an automatic retractable pin tool that uses a computer-controlled motor to automatically retract the pin into the shoulder of the tool at the end of the weld, preventing keyholes. This design allows the pin angle and length to be adjusted for changes in material thickness and results in a smooth hole closure at the end of the weld. Benefits of friction stir welding, using the MSFC retractable pin tool technology, include the following: the ability to weld a wide range of alloys, including previously unweldable and composite materials; provision of twice the fatigue resistance of fusion welds and no keyholes; minimization of material distortion; no creation of hazards such as welding fumes, radiation, high voltage, liquid metals, or arcing; automatic retraction of the pin at the end of the weld; and maintenance of full penetration of the pin.
WORKSHOP ON MINING IMPACTED NATIVE AMERICAN LANDS CD
Multimedia Technology is an exciting mix of cutting-edge Information Technologies that utilize a variety of interactive structures, digital video and audio technologies, 3-D animation, high-end graphics, and peer-reviewed content that are then combined in a variety of user-friend...
NASA's 3D Flight Computer for Space Applications
NASA Technical Reports Server (NTRS)
Alkalai, Leon
2000-01-01
The New Millennium Program (NMP) Integrated Product Development Team (IPDT) for Microelectronics Systems was planning to validate a newly developed 3D Flight Computer system on its first deep-space flight, DS1, launched in October 1998. This computer, developed in the 1995-97 time frame, contains many new computer technologies previously never used in deep-space systems. They include: advanced 3D packaging architecture for future low-mass and low-volume avionics systems; high-density 3D packaged chip-stacks for both volatile and non-volatile mass memory: 400 Mbytes of local DRAM memory, and 128 Mbytes of Flash memory; a high-bandwidth Peripheral Component Interface (PCI) local bus with a bridge to VME; a high-bandwidth (20 Mbps) fiber-optic serial bus; and other attributes, such as standard support for Design for Testability (DFT). Even though this computer system was not completed in time for delivery to the DS1 project, it was an important development along a technology roadmap towards highly integrated and highly miniaturized avionics systems for deep-space applications. This continued technology development is now being performed by NASA's Deep Space System Development Program (also known as X2000) and within JPL's Center for Integrated Space Microsystems (CISM).
Usability of a Low-Cost Head Tracking Computer Access Method following Stroke.
Mah, Jasmine; Jutai, Jeffrey W; Finestone, Hillel; Mckee, Hilary; Carter, Melanie
2015-01-01
Assistive technology devices for computer access can facilitate social reintegration and promote independence for people who have had a stroke. This work describes an exploration of the usefulness and acceptability of a new computer access device called the Nouse™ (Nose-as-mouse). The device uses a standard webcam and video recognition algorithms to map the movement of the user's nose to a computer cursor, thereby allowing hands-free computer operation. Ten participants receiving in- or outpatient stroke rehabilitation completed a series of standardized and everyday computer tasks using the Nouse™ and then completed a device usability questionnaire. Task completion rates were high (90%) for computer activities, though only in the absence of time constraints. Most of the participants were satisfied with ease of use (70%) and liked using the Nouse™ (60%), indicating they could resume most of their usual computer activities apart from word-processing using the device. The findings suggest that hands-free computer access devices like the Nouse™ may be an option for people who experience upper motor impairment caused by stroke and are highly motivated to resume personal computing. More research is necessary to further evaluate the effectiveness of this technology, especially in relation to other computer access assistive technology devices.
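For readers unfamiliar with the general idea of camera-based cursor control, the following Python sketch tracks a single manually initialised facial point with Lucas-Kanade optical flow and maps its motion to relative cursor movement; the initial point placement, the gain factor and the use of OpenCV plus pyautogui are illustrative assumptions, and this is not the Nouse™ algorithm.

# Illustrative webcam feature tracking mapped to cursor motion (not Nouse).
import cv2
import numpy as np
import pyautogui  # assumed available for cursor control

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("no camera available")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Assume the user starts with their nose at the centre of the frame.
h, w = prev_gray.shape
point = np.array([[[w / 2, h / 2]]], dtype=np.float32)
gain = 2.0  # cursor pixels moved per image pixel of tracked motion

for _ in range(300):  # track for roughly 300 frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_point, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
    if status[0][0] == 1:
        dx, dy = (new_point - point)[0, 0]
        pyautogui.moveRel(gain * float(dx), gain * float(dy))  # move the cursor
        point = new_point
    prev_gray = gray
cap.release()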
Computers, Networks, and Desegregation at San Jose High Academy.
ERIC Educational Resources Information Center
Solomon, Gwen
1987-01-01
Describes magnet high school which was created in California to meet desegregation requirements and emphasizes computer technology. Highlights include local computer networks that connect science and music labs, the library/media center, business computer lab, writing lab, language arts skills lab, and social studies classrooms; software; teacher…
The research of computer network security and protection strategy
NASA Astrophysics Data System (ADS)
He, Jian
2017-05-01
With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network security are complex, and maintaining network security is a systematic task that poses considerable challenges. Addressing the safety and reliability problems of computer network systems, this paper draws on practical work experience to examine network security threats, security technologies and system design principles, and offers suggestions and measures intended to help the broad base of computer network users enhance their security awareness and master basic network security techniques.
Public Outreach at RAL: Engaging the Next Generation of Scientists and Engineers
NASA Astrophysics Data System (ADS)
Corbett, G.; Ryall, G.; Palmer, S.; Collier, I. P.; Adams, J.; Appleyard, R.
2015-12-01
The Rutherford Appleton Laboratory (RAL) is part of the UK's Science and Technology Facilities Council (STFC). As part of the Royal Charter that established the STFC, the organisation is required to generate public awareness and encourage public engagement and dialogue in relation to the science undertaken. The staff at RAL firmly support this activity as it is important to encourage the next generation of students to consider studying Science, Technology, Engineering, and Mathematics (STEM) subjects, providing the UK with a highly skilled work-force in the future. To this end, the STFC undertakes a variety of outreach activities. This paper will describe the outreach activities undertaken by RAL, particularly focussing on those of the Scientific Computing Department (SCD). These activities include: an Arduino based activity day for 12-14 year-olds to celebrate Ada Lovelace day; running a centre as part of the Young Rewired State - encouraging 11-18 year-olds to create web applications with open data; sponsoring a team in the Engineering Education Scheme - supporting a small team of 16-17 year-olds to solve a real world engineering problem; as well as the more traditional tours of facilities. These activities could serve as an example for other sites involved in scientific computing around the globe.
Center for Advanced Computational Technology
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2000-01-01
The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.
High-Tech: Help or Hindrance to Hispanics in College?
ERIC Educational Resources Information Center
Mellander, Gustavo A.
2007-01-01
The effect that an inability to purchase computers for home use and a lack of computers and instruction at public schools have had on the ability of Hispanic students to develop technology skills related to computer and Internet use is discussed. This article asks if the nation's emphasis on technology in schools has hindered Hispanic access to…
Is Technology-Mediated Parental Monitoring Related to Adolescent Substance Use?
Rudi, Jessie; Dworkin, Jodi
2018-01-03
Prevention researchers have identified parental monitoring leading to parental knowledge to be a protective factor against adolescent substance use. In today's digital society, parental monitoring can occur using technology-mediated communication methods, such as text messaging, email, and social networking sites. The current study aimed to identify patterns, or clusters, of in-person and technology-mediated monitoring behaviors, and examine differences between the patterns (clusters) in adolescent substance use. Cross-sectional survey data were collected from 289 parents of adolescents using Facebook and Amazon Mechanical Turk (MTurk). Cluster analyses were computed to identify patterns of in-person and technology-mediated monitoring behaviors, and chi-square analyses were computed to examine differences in substance use between the identified clusters. Three monitoring clusters were identified: a moderate in-person and moderate technology-mediated monitoring cluster (moderate-moderate), a high in-person and high technology-mediated monitoring cluster (high-high), and a high in-person and low technology-mediated monitoring cluster (high-low). Higher frequency of technology-mediated parental monitoring was not associated with lower levels of substance use. Results show that higher levels of technology-mediated parental monitoring may not be associated with adolescent substance use.
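The analysis pipeline described above (cluster in-person and technology-mediated monitoring scores, then compare substance use across clusters with chi-square tests) can be sketched as follows; the simulated data, the choice of k-means with three clusters and the binary substance-use coding are assumptions for illustration, not the study's actual procedure.

# Hedged sketch of cluster analysis followed by a chi-square comparison.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# Columns: [in-person monitoring score, technology-mediated monitoring score]
monitoring = rng.uniform(1, 5, size=(289, 2))
substance_use = rng.integers(0, 2, size=289)   # 0 = no reported use, 1 = use

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(monitoring)

# Contingency table: cluster membership vs reported substance use.
table = np.zeros((3, 2), dtype=int)
for c, u in zip(clusters, substance_use):
    table[c, u] += 1

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")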
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute for a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
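A minimal sketch of the flavour of heuristic described above: requests are ordered by an assumed 'hotness' metric (demand intensity times end-to-end distance), routed on shortest paths, and assigned wavelengths first-fit under a wavelength-continuity constraint. The toy topology, the metric and the first-fit policy are illustrative assumptions, not the authors' exact algorithms.

# Illustrative shortest-path RWA with a hottest-request-first policy.
import networkx as nx

NUM_WAVELENGTHS = 4
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("A", "C", 3),
                           ("C", "D", 1), ("B", "D", 2)])
# Wavelengths still free on each (undirected) link.
free = {frozenset(e): set(range(NUM_WAVELENGTHS)) for e in G.edges()}

# Requests: (source, destination, demand intensity).
requests = [("A", "D", 5), ("A", "C", 2), ("B", "D", 7)]

def hotness(req):
    s, d, intensity = req
    dist = nx.shortest_path_length(G, s, d, weight="weight")
    return intensity * dist   # assumed "hotness" metric

for s, d, _intensity in sorted(requests, key=hotness, reverse=True):
    path = nx.shortest_path(G, s, d, weight="weight")
    links = [frozenset((u, v)) for u, v in zip(path, path[1:])]
    common = set.intersection(*(free[l] for l in links))
    if common:                       # wavelength-continuity constraint
        wl = min(common)             # first-fit assignment
        for l in links:
            free[l].remove(wl)
        print(f"{s}->{d}: path {path}, wavelength {wl}")
    else:
        print(f"{s}->{d}: blocked")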
NASA Astrophysics Data System (ADS)
Bolton, Richard W.; Dewey, Allen; Horstmann, Paul W.; Laurentiev, John
1997-01-01
This paper examines the role virtual enterprises will have in supporting future business engagements and the resulting technology requirements. Two representative end-user scenarios are proposed that define the requirements for 'plug-and-play' information infrastructure frameworks and architectures necessary to enable 'virtual enterprises' in US manufacturing industries. The scenarios provide a high-level 'needs analysis' for identifying key technologies, defining a reference architecture, and developing compliant reference implementations. Virtual enterprises are short-term consortia or alliances of companies formed to address fast-changing opportunities. Members of a virtual enterprise carry out their tasks as if they all worked for a single organization under 'one roof', using 'plug-and-play' information infrastructure frameworks and architectures to access and manage all information needed to support the product cycle. 'Plug-and-play' information infrastructure frameworks and architectures are required to enhance collaboration between companies working together on different aspects of a manufacturing process. This new form of collaborative computing will decrease cycle-time and increase responsiveness to change.
First-Order SPICE Modeling of Extreme-Temperature 4H-SiC JFET Integrated Circuits
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.; Spry, David J.; Chen, Liang-Yu
2016-01-01
A separate submission to this conference reports that 4H-SiC Junction Field Effect Transistor (JFET) digital and analog Integrated Circuits (ICs) with two levels of metal interconnect have reproducibly demonstrated electrical operation at 500 C in excess of 1000 hours. While this progress expands the complexity and durability envelope of high temperature ICs, one important area for further technology maturation is the development of reasonably accurate and accessible computer-aided modeling and simulation tools for circuit design of these ICs. Towards this end, we report on development and verification of 25 C to 500 C SPICE simulation models of first order accuracy for this extreme-temperature durable 4H-SiC JFET IC technology. For maximum availability, the JFET IC modeling is implemented using the baseline-version SPICE NMOS LEVEL 1 model that is common to other variations of SPICE software and importantly includes the body-bias effect. The first-order accuracy of these device models is verified by direct comparison with measured experimental device characteristics.
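For orientation, the SPICE LEVEL 1 square-law equations with the body-bias (back-gate) term mentioned above can be written in a few lines; the Python sketch below uses placeholder parameter values and a depletion-mode threshold purely for illustration, and is not the fitted 4H-SiC JFET model reported in the paper.

# Standard SPICE LEVEL 1 (square-law) drain current with body effect.
# Illustrative parameter values only; not the fitted 4H-SiC JFET model.
import math

def level1_id(vgs, vds, vsb, kp=2e-5, w=100e-6, l=10e-6,
              vto=-5.0, gamma=0.5, phi=0.6, lam=0.01):
    # Threshold shift from the body-bias (back-gate) effect.
    vt = vto + gamma * (math.sqrt(2 * phi + vsb) - math.sqrt(2 * phi))
    vov = vgs - vt
    beta = kp * w / l
    if vov <= 0:
        return 0.0                                    # cutoff
    if vds < vov:                                     # triode region
        return beta * (vov * vds - 0.5 * vds ** 2) * (1 + lam * vds)
    return 0.5 * beta * vov ** 2 * (1 + lam * vds)    # saturation

print(level1_id(vgs=0.0, vds=10.0, vsb=0.0))  # depletion-mode device example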
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver
2014-06-14
Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel sequencing (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
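FastaValidator itself is a Java library and its API is not described in the abstract; as a generic illustration of the kind of structural check such a validator performs, here is a small Python sketch. The function name, the accepted alphabet, and the error reporting are assumptions for illustration, not the library's interface.

```python
# Generic FASTA structure check (illustrative sketch, not FastaValidator's API).
# Assumption: a record is a '>' header line followed by one or more sequence
# lines drawn from the IUPAC nucleotide/ambiguity alphabet (gaps allowed).
import re

SEQ_LINE = re.compile(r"^[ACGTUNRYSWKMBDHV\-\.]+$", re.IGNORECASE)

def validate_fasta(path):
    """Return a list of (line_number, message) problems; empty list means valid."""
    problems, seen_header, seen_sequence = [], False, False
    with open(path) as handle:
        for lineno, line in enumerate(handle, start=1):
            line = line.rstrip("\n")
            if not line:
                continue                          # tolerate blank lines
            if line.startswith(">"):
                if seen_header and not seen_sequence:
                    problems.append((lineno, "header without sequence"))
                seen_header, seen_sequence = True, False
            elif not seen_header:
                problems.append((lineno, "sequence data before first header"))
            elif not SEQ_LINE.match(line):
                problems.append((lineno, "invalid characters in sequence line"))
            else:
                seen_sequence = True
    if seen_header and not seen_sequence:
        problems.append((lineno, "trailing header without sequence"))
    return problems
```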
The End of the Rainbow? Color Schemes for Improved Data Graphics
NASA Astrophysics Data System (ADS)
Light, Adam; Bartlein, Patrick J.
2004-10-01
Modern computer displays and printers enable the widespread use of color in scientific communication, but the expertise for designing effective graphics has not kept pace with the technology for producing them. Historically, even the most prestigious publications have tolerated high defect rates in figures and illustrations, and technological advances that make creating and reproducing graphics easier do not appear to have decreased the frequency of errors. Flawed graphics consequently beget more flawed graphics as authors emulate published examples. Color has the potential to enhance communication, but design mistakes can result in color figures that are less effective than gray scale displays of the same data. Empirical research on human subjects can build a fundamental understanding of visual perception and scientific methods can be used to evaluate existing designs, but creating effective data graphics is a design task and not fundamentally a scientific pursuit. Like writing well, creating good data graphics requires a combination of formal knowledge and artistic sensibility tempered by experience: a combination of "substance, statistics, and design".
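The paper's argument is about design judgment rather than code, but the specific pitfall it names, a rainbow palette being less effective than a well-chosen alternative, can be seen directly. The short matplotlib sketch below renders the same synthetic field with a rainbow ("jet") colormap and a perceptually uniform one ("viridis"); the data and colormap choices are illustrative and not taken from the paper.

```python
# Side-by-side comparison of a rainbow colormap and a perceptually uniform one
# (synthetic data; illustrative of the design issue discussed in the abstract).
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
z = np.exp(-(x**2 + y**2) / 2) + 0.2 * np.sin(3 * x)

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, cmap in zip(axes, ("jet", "viridis")):
    im = ax.pcolormesh(x, y, z, cmap=cmap, shading="auto")
    ax.set_title(f"cmap = {cmap}")
    fig.colorbar(im, ax=ax)
plt.show()
```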
Three-Dimensional Media Technologies: Potentials for Study in Visual Literacy.
ERIC Educational Resources Information Center
Thwaites, Hal
This paper presents an overview of three-dimensional media technologies (3Dmt). Many of the new 3Dmt are the direct result of interactions of computing, communications, and imaging technologies. Computer graphics are particularly well suited to the creation of 3D images due to the high resolution and programmable nature of the current displays.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branscomb, L.; Hurley, D.; Keller, J.
1998-04-01
This project was undertaken to explore new options for connecting homes and small businesses to high-speed communications networks, such as the Internet. Fundamental to this inquiry was an interest in looking at options which are newly enabled through changes in technology and regulation, and which go beyond the traditional top-down, centralized model for local access. In particular, the authors focused on opportunities for end-user and community-level investment. This project was intended to investigate the opportunities presented by the decreasing cost of computing and networking platforms, the unbundling of local exchange network elements, and the intelligent endpoints model of networking best exemplified by the Internet. Do these factors, along with communications technologies such as spread spectrum wireless, digital subscriber line services, and the ability to modulate a communications signal over the electric power line infrastructure, enable new models for end-user investment in intelligent infrastructure as a leverage point for accessing the broadband network? This question was first explored through a two-day conference held at the Freedom Forum in Arlington, Virginia, October 29 and 30, 1996. The workshop addressed issues in the consumer adoption of new communications technologies, use of the electric power line infrastructure, the role of municipalities, and the use of alternative technologies, such as xDSL, satellite, spread spectrum wireless, LMDS, and others. The best of these papers have been further developed, with editorial guidance provided by Harvard, and compiled in the form of a book (The First 100 Feet: New Options for Internet and Broadband Access, Deborah Hurley and James Keller, eds., MIT Press, 1998) to be published as part of the MIT Press Spring 1998 catalogue. A summary of topics covered by the book is given in this report.
Finding a roadmap to achieve large neuromorphic hardware systems
Hasler, Jennifer; Marr, Bo
2013-01-01
Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy-efficient neural computing structures, potentially both for solving engineering applications and for understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed, as well as how the implementation and application space of neuromorphic systems are expected to evolve over time. PMID:24058330
Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing
NASA Astrophysics Data System (ADS)
Chine, Karim
The UK, through the e-Science program, the US through the NSF-funded cyber infrastructure and the European Union through the ICT Calls aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster; Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, didn't meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to the Industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers for accessing federated computational resources, software tools and data; enable collaboration and resources sharing and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.
3D printing of preclinical X-ray computed tomographic data sets.
Doney, Evan; Krumdick, Lauren A; Diener, Justin M; Wathen, Connor A; Chapman, Sarah E; Stamile, Brian; Scott, Jeremiah E; Ravosa, Matthew J; Van Avermaete, Tony; Leevy, W Matthew
2013-03-22
Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional, mold-injection methods to create models or parts have several limitations, the most important of which is a difficulty in making highly complex products in a timely, cost-effective manner.(1) However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models.(2) These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers who can utilize the technology to improve visualization proficiency. (3, 4) The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of pre-clinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students, or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages.
Schick-Makaroff, Kara; Molzahn, Anita
2014-01-01
Electronic capture of patients' reports of their health is significant in clinical nephrology research because health-related quality of life (HRQOL) for patients with end-stage renal disease is compromised and assessment by patients of their HRQOL in practice is relatively uncommon. The purpose of this study was to evaluate patient satisfaction with and time involved in administering HRQOL and symptom assessment measures using tablet computers in two outpatient home dialysis clinics. A cross-sectional observational study design was employed. The study was conducted in two home dialysis clinics. Fifty-six patients participated in the study; 35 males (63%) and 21 females (37%) with a mean age of 66 ± 12 (36-90 years old) were included. Forty-nine participants were on peritoneal dialysis (87%), 6 on home hemodialysis (11%), and 1 on nocturnal home hemodialysis (2%). Measures included the Kidney Disease Quality of Life-36 (KDQOL-36), the Edmonton Symptom Assessment Scale (ESAS) and Participant's Level of Satisfaction in Using a Tablet Computer. Using a tablet computer, participants completed the three measures. Descriptive statistics and bivariate correlations were calculated. Participants' satisfaction with use of the tablet computer was high; 66% were "very satisfied", 7% "satisfied", 2% "slightly satisfied", and 18% "neutral". On the 7-point Likert-type scale, the mean satisfaction score was 5.11 (SD = 1.6). Mean time to complete the measures was: Level of Satisfaction 1.15 minutes (SD = 0.41), ESAS 2.55 minutes (SD = 1.04), and KDQOL 9.56 minutes (SD = 2.03); the mean time to complete all three instruments was 13.19 minutes (SD = 2.42). There were no significant correlations between level of satisfaction and age, gender, HRQOL, time taken to complete surveys, computer experience, or comfort with technology. Comfort with technology and computer experience were highly correlated, r = .7, p (one-tailed) < 0.01. Limitations include lack of generalizability because of a small self-selected sample of relatively healthy patients and a lack of psychometric testing on the measure of satisfaction. Participants were satisfied with the platform and the time involved for completion of instruments was modest. Routine use of HRQOL measures for clinical purposes may be facilitated through use of tablet computers.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.
This report discusses Senate Bill no. 272, which provides for a coordinated federal research and development program to ensure continued U.S. leadership in high-performance computing. High performance computing is defined as representing the leading edge of technological advancement in computing, i.e., the most sophisticated computer chips, the…
SANs and Large Scale Data Migration at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Salmon, Ellen M.
2004-01-01
Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.
Benchmarking high performance computing architectures with CMS’ skeleton framework
NASA Astrophysics Data System (ADS)
Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.
2017-10-01
In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.
ERIC Educational Resources Information Center
Lee, Kathryn S.; Smith, Shaunna; Bos, Beth
2014-01-01
This article reports a heuristic case study that explored how components of Technological Pedagogical Knowledge (TPK) manifested in the artifacts of post-Baccalaureate pre-service teachers. Self-reported perceptions of their technology integration competencies were high. End-of-semester presentations reflected three distinct views of technology…
High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems
2017-05-01
In addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API. ... expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU. [Report header residue: ERDC TR-17-2, Military Engineering Applied Research, High-Fidelity Simulations of Electromagnetic Propagation and RF Communication.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Hack, James; Riley, Katherine
The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today's world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A
2017-03-01
Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods that combine hand-crafted image feature descriptors with various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of biomedical image classification. The same also holds true for artificial neural network models that are either trained directly on limited biomedical images or used as a black box to extract deep features learned on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model that shows a highly reliable and accurate performance, which has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
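The paper's exact architecture is not given in the abstract; as a generic illustration of domain-transferred CNN classification, here is a minimal PyTorch sketch that reuses an ImageNet-pretrained backbone and retrains only a new classification head. The backbone choice, class count, and training details are assumptions, not the authors' model.

```python
# Minimal transfer-learning sketch (illustrative; not the authors' exact model).
# Assumes an ImageNet-pretrained ResNet-18 backbone, a new head for
# `num_classes` biomedical categories, and a standard cross-entropy objective.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4          # hypothetical number of biomedical image categories

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():      # freeze the transferred feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a mini-batch of raw-pixel image tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with a random batch of 3-channel 224x224 images
dummy_images = torch.randn(8, 3, 224, 224)
dummy_labels = torch.randint(0, num_classes, (8,))
print(train_step(dummy_images, dummy_labels))
```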
Cooperative Electronic Mail: Effective Communication Technology for Introductory Chemistry
NASA Astrophysics Data System (ADS)
Pence, Laura E.
1999-05-01
One drawback to using cooperative learning in the classroom is that it takes up class time and reduces the amount of content that can be covered during a semester. Cooperative electronic mail is an excellent alternate method of using cooperative learning that shifts the medium of interaction to the computer and encourages students to learn to communicate effectively through technology. In this project, three types of exercises were assigned, one prior to each exam. These three assignments were (i) an open-ended question, (ii) a traditional cooperative activity done electronically, and (iii) an exercise to allow students to write exam questions for each other. The average participation rate in the exercises was 90% over four semesters, which indicated that the project was an effective incentive to get students to use email regularly. The evaluations of the project were also extremely positive. One surprising result of the assessment was that female students gave even more favorable responses than men, suggesting that this project was an excellent way to encourage women to use computer technology.
The Prospects of Whole Brain Emulation within the next Half- Century
NASA Astrophysics Data System (ADS)
Eth, Daniel; Foust, Juan-Carlos; Whale, Brandon
2013-12-01
Whole Brain Emulation (WBE), the theoretical technology of modeling a human brain in its entirety on a computer, with thoughts, feelings, memories, and skills intact, is a staple of science fiction. Recently, proponents of WBE have suggested that it will be realized in the next few decades. In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE's future to be the development of advanced minuscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized and the technology is used for moderately cooperative ends.
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would make it possible to bring this technology, and with it knowledge, to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then needs to be analyzed and logged without requiring too much of the system resources, long computation times, or excessive battery use by the end-point device. Cloud environments were designed to address these constraints by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare it with other image formats in terms of size, noise, and correctness. We also present the cloud configuration used to segment the captured video into frames that can later be used for further analysis.
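The abstract describes capturing the multispectral stack as an mp4 and splitting it into frames in the cloud, but gives no implementation details; here is a minimal OpenCV sketch of that splitting step. The file name, output format, and frame naming are placeholders, not the authors' pipeline.

```python
# Minimal sketch: split an mp4 capture into individual frames for later analysis
# (illustrative of the cloud-side segmentation step; paths are placeholders).
import cv2

def extract_frames(video_path, out_prefix="frame"):
    """Write every frame of `video_path` to PNG files and return the count."""
    capture = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = capture.read()
        if not ok:                      # end of stream
            break
        cv2.imwrite(f"{out_prefix}_{count:04d}.png", frame)  # lossless output
        count += 1
    capture.release()
    return count

print(extract_frames("multispectral_capture.mp4"))
```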
2010-12-01
...with high correlation immunity and then evaluate these functions for other desirable cryptographic features. [The remainder of this excerpt is fragmentary extraction residue from the source's METHOD section: a configuration listing of PRIMARY and SECONDARY input files and pseudocode comments for computing the function value for each set u and each value of v.]
ERIC Educational Resources Information Center
Supej, Matej; Holmberg, Hans-Christer
2011-01-01
Accurate time measurement is essential to temporal analysis in sport. This study aimed to (a) develop a new method for time computation from surveyed trajectories using a high-end global navigation satellite system (GNSS), (b) validate its precision by comparing GNSS with photocells, and (c) examine whether gate-to-gate times can provide more…
Design and implementation of a low-power SOI CMOS receiver
NASA Astrophysics Data System (ADS)
Zencir, Ertan
There is a strong demand for wireless communications in civilian and military applications, and space explorations. This work attempts to implement a low-power, high-performance fully-integrated receiver for deep space communications using Silicon on Insulator (SOI) CMOS technology. Design and implementation of a UHF low-IF receiver front-end in a 0.35-μm SOI CMOS technology are presented. Problems and challenges in implementing a highly integrated receiver at UHF are identified. Low-IF architecture, suitable for low-power design, has been adopted to mitigate the noise at the baseband. Design issues of the receiver building blocks including single-ended and differential LNA's, passive and active mixers, and variable gain/bandwidth complex filters are discussed. The receiver is designed to have a variable conversion gain of more than 100 dB with a 70 dB image rejection and a power dissipation of 45 mW from a 2.5-V supply. Design and measured performance of the LNA's, and the mixer are presented. Measurement results of RF front-end blocks including a single-ended LNA, a differential LNA, and a double-balanced mixer demonstrate the low power realizability of RF front-end circuits in SOI CMOS technology. We also report on the design and simulation of the image-rejecting complex IF filter and the full receiver circuit. Gain, noise, and linearity performance of the receiver components prove the viability of fully integrated low-power receivers in SOI CMOS technology.
Stuck in the Shallow End: Education, Race, and Computing. Updated Edition
ERIC Educational Resources Information Center
Margolis, Jane
2017-01-01
The number of African Americans and Latino/as receiving undergraduate and advanced degrees in computer science is disproportionately low. And relatively few African American and Latino/a high school students receive the kind of institutional encouragement, educational opportunities, and preparation needed for them to choose computer science as a…
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Zamanyan, Alen; Torri, Federica; Macciardi, Fabio; Hobel, Sam; Moon, Seok Woo; Sung, Young Hee; Jiang, Zhiguo; Labus, Jennifer; Kurth, Florian; Ashe-McNalley, Cody; Mayer, Emeran; Vespa, Paul M.; Van Horn, John D.; Toga, Arthur W.
2013-01-01
The volume, diversity and velocity of biomedical data are exponentially increasing providing petabytes of new neuroimaging and genetics data every year. At the same time, tens-of-thousands of computational algorithms are developed and reported in the literature along with thousands of software tools and services. Users demand intuitive, quick and platform-agnostic access to data, software tools, and infrastructure from millions of hardware devices. This explosion of information, scientific techniques, computational models, and technological advances leads to enormous challenges in data analysis, evidence-based biomedical inference and reproducibility of findings. The Pipeline workflow environment provides a crowd-based distributed solution for consistent management of these heterogeneous resources. The Pipeline allows multiple (local) clients and (remote) servers to connect, exchange protocols, control the execution, monitor the states of different tools or hardware, and share complete protocols as portable XML workflows. In this paper, we demonstrate several advanced computational neuroimaging and genetics case-studies, and end-to-end pipeline solutions. These are implemented as graphical workflow protocols in the context of analyzing imaging (sMRI, fMRI, DTI), phenotypic (demographic, clinical), and genetic (SNP) data. PMID:23975276
NASA Langley Research Center's distributed mass storage system
NASA Technical Reports Server (NTRS)
Pao, Juliet Z.; Humes, D. Creig
1993-01-01
There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existing mass storage system to the DMSS.
NASA Astrophysics Data System (ADS)
Schrage, J.; Soenmez, Y.; Happel, T.; Gubler, U.; Lukowicz, P.; Mrozynski, G.
2006-02-01
From long-haul, metro-access, and intersystem links, the trend is toward applying optical interconnection technology at increasingly shorter distances. Intrasystem interconnects such as data busses between microprocessors and memory blocks are still based on copper interconnects today. This causes a bottleneck in computer systems, since the achievable bandwidth of electrical interconnects is limited by the underlying physical properties. Approaches to solve this problem by embedding optical multimode polymer waveguides into the board (electro-optical circuit board technology, EOCB) have been reported earlier. The feasibility in principle of optical interconnection technology in chip-to-chip applications has been validated in a number of projects. For cost reasons, waveguides with large cross sections are used in order to relax alignment requirements and to allow automatic placement and assembly without any active alignment of components. On the other hand, the bandwidth of these highly multimodal waveguides is restricted by mode dispersion. The advance of WDM technology towards intrasystem applications will provide the sufficiently high bandwidth required for future high-performance computer systems: assuming, for example, 8 wavelength channels with 12 Gbps (SDR) each, optical on-board interconnects can be realized with data rates an order of magnitude higher than the data rates of electrical interconnects for distances typically found on today's computer boards and backplanes. The data rate will be twice as high if DDR signaling is applied to the optical signals as well. In this paper we discuss an approach for a hybrid integrated optoelectronic WDM package which might enable the application of WDM technology to EOCB.
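For concreteness, the aggregate bandwidth implied by the example figures in the abstract works out as follows; the channel count and per-channel rate are the abstract's illustrative numbers, not measured results.

```python
# Aggregate on-board optical bandwidth implied by the abstract's example figures
# (illustrative arithmetic only).
channels = 8                         # WDM wavelength channels
rate_gbps = 12                       # per-channel rate at single data rate (SDR)
sdr_total = channels * rate_gbps     # -> 96 Gbps aggregate
ddr_total = 2 * sdr_total            # -> 192 Gbps if DDR signaling is used
print(sdr_total, ddr_total)
```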
A laboratory breadboard system for dual-arm teleoperation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Szakaly, Z.; Kim, W. S.
1990-01-01
The computing architecture of a novel dual-arm teleoperation system is described. The novelty of this system is that: (1) the master arm is not a replica of the slave arm; it is unspecific to any manipulator and can be used for the control of various robot arms with software modifications; and (2) the force feedback to the general purpose master arm is derived from force-torque sensor data originating from the slave hand. The computing architecture of this breadboard system is a fully synchronized pipeline with unique methods for data handling, communication and mathematical transformations. The computing system is modular, thus inherently extendable. The local control loops at both sites operate at 100 Hz rate, and the end-to-end bilateral (force-reflecting) control loop operates at 200 Hz rate, each loop without interpolation. This provides high-fidelity control. This end-to-end system elevates teleoperation to a new level of capabilities via the use of sensors, microprocessors, novel electronics, and real-time graphics displays. A description is given of a graphic simulation system connected to the dual-arm teleoperation breadboard system. High-fidelity graphic simulation of a telerobot (called Phantom Robot) is used for preview and predictive displays for planning and for real-time control under several seconds communication time delay conditions. High fidelity graphic simulation is obtained by using appropriate calibration techniques.
Data systems and computer science space data systems: Onboard networking and testbeds
NASA Technical Reports Server (NTRS)
Dalton, Dan
1991-01-01
The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.
Frank, Seth
2003-01-01
When we think about health care IT, we don't just think about clinical automation with the movement to computerized physician order entry (CPOE), but also the need to upgrade legacy financial and administrative systems to interact with clinical systems. Technology acceptance by physicians remains low, and computer use by physicians for data entry and analysis remains minimal. We expect this trend to change, and expect increased automation to represent gradual change. The HCIT space is dynamic, with many opportunities, but also many challenges. The unique nature of the end market buyers, existing business models, and nature of the technology makes this a challenging but dynamic area for equity investment.
Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites
NASA Astrophysics Data System (ADS)
2002-03-01
Many satellites are an integrated collection of sensors and actuators that require dedicated real-time control. For single processor systems, additional sensors require an increase in computing power and speed to provide the multi-tasking capability needed to service each sensor. Faster processors cost more and consume more power, which taxes a satellite's power resources and may lead to shorter satellite lifetimes. An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator. The design of microdot, a four-bit microcontroller for distributed low-end computing, is presented. The design is based on previous research completed at the Space Electronics Branch, Air Force Research Laboratory (AFRL/VSSE) at Kirtland AFB, NM, and the Air Force Institute of Technology at Wright-Patterson AFB, OH. The Microdot has 29 instructions and a 1K x 4 instruction memory. The distributed computing architecture is based on the Philips Semiconductor I2C Serial Bus Protocol. A prototype was implemented and tested using an Altera Field Programmable Gate Array (FPGA). The prototype was operable to 9.1 MHz. The design was targeted for fabrication in a radiation-hardened-by-design gate-array cell library for the TSMC 0.35 micrometer CMOS process.
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1994-01-01
In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurements of the maximum reach of occupants of a microgravity environment provide knowledge about maximum functional placement for tasking situations. Calculations for a full-body, functional reach envelope for microgravity environments are imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body, functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.
Alkhateeb, Haitham M
2002-02-01
This study was designed to compare the achievement, attitudes toward success in mathematics, and mathematics anxiety of college students taught brief calculus using a graphic calculator with those of students taught using the computer algebra system Maple and a technology-based textbook. 50 men and 50 women, students in three classes at a large public university in the southwestern United States, participated. Students' achievement in brief calculus was measured by performance on a teacher-made achievement test given at the end of the study. Analysis of variance showed no significant difference in achievement between the groups. Responses to paper-and-pencil inventories measuring change in attitudes and anxiety indicated significant differences in favor of the students using the computer.
Alberts, Gerard; van den Bogaard, Adrienne
2008-01-01
Along with the international trends in history of computing, Dutch contributions over the past twenty years moved away from a focus on machinery to the broader scope of use of computers, appropriation of computing technologies in various traditions, labour relations and professionalisation issues, and, lately, software. It is only natural that an emerging field like computer science sets out to write its genealogy and canonise the important steps in its intellectual endeavour. It is fair to say that a historiography diverging from such "home" interest started in 1987 with the work of Eda Kranakis (then active in The Netherlands), commissioned by the national bureau for technology assessment, and Gerard Alberts, turning a commemorative volume of the Mathematical Center into a history of the same institute. History of computing in The Netherlands made a major leap in the spring of 1994 when Dirk de Wit, Jan van den Ende and Ellen van Oost defended their dissertations, on the roads towards adoption of computing technology in banking, in science and engineering, and on the gender aspect in computing. Here, history of computing had already moved from machines to the use of computers. The three authors joined Gerard Alberts and Onno de Wit in preparing a volume on the rise of IT in The Netherlands, the sequel of which is now in preparation in a team led by Adrienne van den Bogaard. Dutch research reflected the international attention for professionalisation issues (Ensmenger, Haigh) very early on in the dissertation by Ruud van Dael, Something to do with computers (2001), revealing how occupations dealing with computers typically escape the pattern of closure by professionalisation as expected by the, thus outdated, sociology of professions. History of computing not only takes use and users into consideration, but finally, as one may say, confronts the technological side of putting the machine to use, software, head on. The groundbreaking works of the 2000 Paderborn meeting and by Martin Campbell-Kelly resonate in work done in The Netherlands and recently in a major research project sponsored by the European Science Foundation: Software for Europe. The four contributions to this issue offer a true cross-section of ongoing history of computing in The Netherlands. Gerard Alberts and Huub de Beer return to the earliest computers at the Mathematical Center. As they do so under the perspective of using the machines, the result is, let us say, remarkable. Adrienne van den Bogaard compares the styles of software as practiced by Van der Poel and Dijkstra: so much had these two pioneers in common, so different the conclusions they drew. Frank Veraart treats us to an excerpt from his recent dissertation on the domestication of microcomputer technology: appropriation of computing technology is shown by the role of intermediate actors. Onno de Wit, finally, gives an account of the development, prior to the Internet, of a national data communication network among large-scale users and its remarkable persistence under competition with new network technologies.
Envisioning future cognitive telerehabilitation technologies: a co-design process with clinicians.
How, Tuck-Voon; Hwang, Amy S; Green, Robin E A; Mihailidis, Alex
2017-04-01
Purpose Cognitive telerehabilitation is the concept of delivering cognitive assessment, feedback, or therapeutic intervention at a distance through technology. With the increase of mobile devices, wearable sensors, and novel human-computer interfaces, new possibilities are emerging to expand the cognitive telerehabilitation paradigm. This research aims to: (1) explore design opportunities and considerations when applying emergent pervasive computing technologies to cognitive telerehabilitation and (2) develop a generative co-design process for use with rehabilitation clinicians. Methods We conducted a custom co-design process that used design cards, probes, and design sessions with traumatic brain injury (TBI) clinicians. All field notes and transcripts were analyzed qualitatively. Results Potential opportunities for TBI cognitive telerehabilitation exist in the areas of communication competency, executive functioning, emotional regulation, energy management, assessment, and skill training. Designers of TBI cognitive telerehabilitation technologies should consider how technologies are adapted to a patient's physical/cognitive/emotional state, their changing rehabilitation trajectory, and their surrounding life context (e.g. social considerations). Clinicians were receptive to our co-design approach. Conclusion Pervasive computing offers new opportunities for life-situated cognitive telerehabilitation. Convivial design methods, such as this co-design process, are a helpful way to explore new design opportunities and an important space for further methodological development. Implications for Rehabilitation Designers of rehabilitation technologies should consider how to extend current design methods in order to facilitate the creative contribution of rehabilitation stakeholders. This co-design approach enables a fuller participation from rehabilitation clinicians at the front-end of design. Pervasive computing has the potential to: extend the duration and intensity of cognitive telerehabilitation training (including the delivery of 'booster' sessions or maintenance therapies); provide assessment and treatment in the context of a traumatic brain injury (TBI) patient's everyday life (thereby enhancing generalization); and permit time-sensitive interventions. Long-term use of pervasive computing for TBI cognitive telerehabilitation should take into account a patient's changing recovery trajectory, their meaningful goals, and their journey from loss to redefinition.
ERIC Educational Resources Information Center
Simburg, Suzanne; Roza, Marguerite
2012-01-01
Even as new educational technologies have emerged, staffing innovations have seemed all but impossible in American schools. Charter and district schools alike long ago surrendered to the notion that education requires at least as many core teachers as is determined from dividing enrollment by class size. A few new school designs suggest that we…
The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era.
ERIC Educational Resources Information Center
Rifkin, Jeremy
This book explores the global economic and social changes that will likely occur as continued technological advancements (especially in the field of computer science) reduce the number of workers needed to produce the goods and services needed by the global population. The book is divided into five sections. Section 1 presents an overview of the…
15 CFR 762.2 - Records to be retained.
Code of Federal Regulations, 2011 CFR
2011-01-01
... pertaining to the types of transactions described in § 762.1(a) of this part, which are made or obtained by a..., High Performance Computers; (7) supplement No. 3 to part 742 High Performance Computers, Safeguards and...; (44) § 745.2, End-use certificates; (45) § 758.2(c), Assumption writing; and (46) § 734.4(g), de...
The Effect of Color Choice on Learner Interpretation of a Cosmology Visualization
ERIC Educational Resources Information Center
Buck, Zoe
2013-01-01
As we turn more and more to high-end computing to understand the Universe at cosmological scales, dynamic visualizations of simulations will take on a vital role as perceptual and cognitive tools. In collaboration with the Adler Planetarium and University of California High-Performance AstroComputing Center (UC-HiPACC), I am interested in better…
Use of information and communication technology among dental students at the University of Jordan.
Rajab, Lamis D; Baqain, Zaid H
2005-03-01
The aim of this study was to investigate the current knowledge, skills, and opinions of undergraduate dental students at the University of Jordan with respect to information communication technology (ICT). Dental students from the second, third, fourth, and fifth years were asked to complete a questionnaire presented in a lecture at the end of the second semester in the 2002-03 academic year. The response rate was 81 percent. Besides free and unlimited access to computers at the school of dentistry, 74 percent of the students had access to computers at home. However, 44 percent did not use a computer regularly. Male students were more regular and longer users of computers than females (p<0.001). A significant number of students (70 percent) judged themselves competent in information technology (IT) skills. More males felt competent in basic IT skills than did females (p<0.05). More than two-thirds acquired their computer skills through sources other than at the university. The main educational use of computers was accessing the Internet, word processing, multimedia, presentations, Medline search, and data management. More clinical students felt competent in word-processing skills (p<0.05) and many more used word processing for their studies (p<0.001) than did preclinical students. More males used word processing for their studies than females (p<0.001). Students used computers for personal activities more frequently than for academic reasons. More males used computers for both academic (p<0.01) and personal activities (p<0.001) than did females. All students had access to the Internet at the university, and 54 percent had access at home. A high percentage of students (94 percent) indicated they were comfortable using the Internet, 75 percent said they were confident in the accuracy, and 80 percent said they were confident in the relevance of information obtained from the Internet. Most students (90 percent) used email. Most students (83 percent) supported the idea of placing lectures on the web, and 61.2 percent indicated that this would not influence lecture attendance. Students used the Internet more for personal reasons than for the study of dentistry. More clinical students used the Internet for dentistry than preclinical students (p<0.001). More males than females used the Internet for dentistry (p<0.01) as well as for pleasure (p<0.01). Time and availability were the main obstacles to Internet use. Dental students at the University of Jordan have access to substantial IT resources and demonstrated attitudes toward the computer and Internet technology and use that were similar to other students in other nations. However, the educational use of ICT among Jordanian students remains low.
Efficient heart beat detection using embedded system electronics
NASA Astrophysics Data System (ADS)
Ramasamy, Mouli; Oh, Sechang; Varadan, Vijay K.
2014-04-01
The present-day biotechnical field concentrates on developing various types of innovative ambulatory and wearable devices to monitor several bio-physical, physio-pathological, bio-electrical and bio-potential factors to assess a human body's health condition without intruding on quotidian activities. One of the most important aspects of this evolving technology is monitoring heart beat rate and the electrocardiogram (ECG), from which many other subsidiary results can be derived. Conventionally, such devices and systems consume a lot of power since the acquired signals are always processed on the receiver end. Because of this back-end processing, the unprocessed raw data is transmitted, resulting in greater use of power, memory and processing time. This paper proposes an innovative technique in which the acquired signals are processed by a microcontroller in the front end of the module and only the processed signal is then transmitted wirelessly to the display unit. Therefore, power consumption is considerably reduced and clearer data analysis is performed within the module. This also avoids the need for the user to be educated about usage of the device and signal/system analysis, since only the number of heart beats will be displayed at the user end. Additionally, the proposed concept also mitigates other disadvantages such as obtrusiveness, high power consumption and size. To demonstrate these factors, a commercial controller board was used to extend the monitoring method by using saved ECG data from a computer.
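The abstract does not specify the detection algorithm run on the microcontroller front end; as an illustration of the kind of on-device processing described, here is a minimal Python sketch of threshold-based beat counting on a sampled ECG trace. The sampling rate, threshold rule, and refractory period are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of front-end beat counting from a sampled ECG trace
# (illustrative only; threshold, refractory period, and sampling rate assumed).
import numpy as np

FS = 250                 # assumed sampling rate in Hz
REFRACTORY_S = 0.25      # ignore re-triggers within 250 ms of a detected beat

def count_beats(ecg, threshold=None):
    """Count R-peak threshold crossings with a simple refractory period."""
    ecg = np.asarray(ecg, dtype=float)
    if threshold is None:
        threshold = ecg.mean() + 4.0 * ecg.std()   # crude adaptive threshold
    refractory = int(REFRACTORY_S * FS)
    beats, last = 0, -refractory
    for i in range(1, len(ecg)):
        rising_cross = ecg[i - 1] < threshold <= ecg[i]
        if rising_cross and i - last >= refractory:
            beats += 1
            last = i
    return beats

# Example: synthetic 10 s trace with one artificial R-peak per second (~60 bpm)
t = np.arange(0, 10, 1 / FS)
ecg = 0.1 * np.random.randn(t.size)
ecg[FS // 2::FS] += 1.5
print(count_beats(ecg), "beats detected in the 10 s synthetic trace")
```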
Lee, Jungwoo; Kohl, Nathaniel; Shanbhang, Sachin; Parekkadan, Biju
2015-12-01
Microfluidic technologies have substantially advanced cancer research by enabling the isolation of rare circulating tumor cells (CTCs) for diagnostic and prognostic purposes. The characterization of isolated CTCs has been limited due to the difficulty in recovering and growing isolated cells with high fidelity. Here, we present a strategy that uses a 3D scaffold, integrated into a microfluidic device, as a transferable substrate that can be readily isolated after device operation for serial use in vivo as a transplanted tissue bed. Hydrogel scaffolds were incorporated into a PDMS fluidic chamber prior to bonding and were rehydrated in the chamber after fluid contact. The hydrogel matrix completely filled the fluid chamber, significantly increasing the surface-area-to-volume ratio, and could be directly visualized under a microscope. Computational modeling defined different flow and pressure regimes that guided the conditions used to operate the chip. As a proof of concept using a model cell line, we confirmed human prostate tumor cell attachment in the microfluidic scaffold chip, retrieval of the scaffold en masse, and serial implantation of the scaffold into a mouse model with preserved xenograft development. With further improvement in capture efficiency, this approach can offer an end-to-end platform for the continuous study of isolated cancer cells from a biological fluid to a xenograft in mice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
Virtual machine (VM) technologies, especially those offered via Cloud platforms, present new dimensions with respect to performance and cost in executing parallel discrete event simulation (PDES) applications. Due to the introduction of overall cost as a metric, the choice of the highest-end computing configuration is no longer the most economical one. Moreover, runtime dynamics unique to VM platforms introduce new performance characteristics, and the variety of possible VM configurations give rise to a range of choices for hosting a PDES run. Here, an empirical study of these issues is undertaken to guide an understanding of the dynamics, trends and trade-offs in executing PDES on VM/Cloud platforms. Performance results and cost measures are obtained from actual execution of a range of scenarios in two PDES benchmark applications on the Amazon Cloud offerings and on a high-end VM host machine. The data reveals interesting insights into the new VM-PDES dynamics that come into play and also leads to counter-intuitive guidelines with respect to choosing the best and second-best configurations when overall cost of execution is considered. In particular, it is found that choosing the highest-end VM configuration guarantees neither the best runtime nor the least cost. Interestingly, choosing a (suitably scaled) low-end VM configuration provides the least overall cost without adversely affecting the total runtime.
Analogy Mapping Development for Learning Programming
NASA Astrophysics Data System (ADS)
Sukamto, R. A.; Prabawa, H. W.; Kurniawati, S.
2017-02-01
Programming skill is important for computer science students, yet many computer science students in Indonesia currently lack programming skills and information technology knowledge. This is at odds with the implementation of the ASEAN Economic Community (AEC) at the end of 2015, which requires qualified workers. This study contributes to building programming skills by mapping program code to visual analogies used as learning media. The developed media were based on state-machine and compiler principles and were implemented in the C programming language. The state of every basic programming construct was successfully represented as an analogy visualization.
Flow characteristics in narrowed coronary bypass graft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernad, S. I.; Bosioc, A.; Totorean, A. F.
2016-06-08
Tortuous saphenous vein graft (SVG) hemodynamics was investigated using computational fluid dynamics (CFD) techniques. Computed tomography (CT) technology is used for non-invasive bypass graft assessment 7 days after surgery. The CT investigation showed two regions with severe shape remodelling: the first is an elbow-type contortion and the second is a severe curvature with tortuous area reduction. In conclusion, the helical flow induced by vessel torsion may stabilize the blood flow in the distal part of the SVG, reducing flow disturbance and suppressing flow separation, but at the distal end of the graft it may promote inflammatory processes in the vessel.
Networking for large-scale science: infrastructure, provisioning, transport and application mapping
NASA Astrophysics Data System (ADS)
Rao, Nageswara S.; Carter, Steven M.; Wu, Qishi; Wing, William R.; Zhu, Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M.
2005-01-01
Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configurations and protocols that provide multiple Gbps flows from a Cray X1 to external hosts.
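The mapping scheme itself is not described in the abstract; as a rough illustration of the underlying optimization, the sketch below uses dynamic programming to assign an ordered pipeline of visualization stages to an ordered chain of network nodes so that total computation plus transfer delay is minimized. The chain topology, the assumption that data enters at the first node, and all cost numbers are illustrative assumptions, not the authors' formulation.

```python
# Dynamic-programming sketch: map an ordered visualization pipeline onto an
# ordered chain of network nodes to minimize end-to-end delay (illustrative
# cost model; compute and link costs below are made-up placeholders).
import math

def map_pipeline(compute_cost, link_delay):
    """
    compute_cost[s][n]: delay of running stage s on node n.
    link_delay[n]: transfer delay of the link between node n and node n+1.
    Stages must be assigned to nodes in non-decreasing order along the chain.
    Returns (min_total_delay, assignment list of node indices per stage).
    """
    n_stages, n_nodes = len(compute_cost), len(compute_cost[0])
    dp = [[math.inf] * n_nodes for _ in range(n_stages)]
    parent = [[None] * n_nodes for _ in range(n_stages)]
    # prefix[k]: cumulative link delay from node 0 to node k
    prefix = [0.0]
    for d in link_delay:
        prefix.append(prefix[-1] + d)

    for node in range(n_nodes):
        dp[0][node] = prefix[node] + compute_cost[0][node]  # data enters at node 0
    for s in range(1, n_stages):
        for node in range(n_nodes):
            for prev in range(node + 1):
                cand = (dp[s - 1][prev] + (prefix[node] - prefix[prev])
                        + compute_cost[s][node])
                if cand < dp[s][node]:
                    dp[s][node], parent[s][node] = cand, prev
    # Recover the best assignment by backtracking through the parent table
    best_node = min(range(n_nodes), key=lambda n: dp[-1][n])
    assignment, node = [], best_node
    for s in range(n_stages - 1, -1, -1):
        assignment.append(node)
        node = parent[s][node] if parent[s][node] is not None else node
    return dp[-1][best_node], assignment[::-1]

# Example: 3 pipeline stages mapped onto 4 nodes in a chain
compute = [[4, 2, 2, 5],   # stage 0 cost on each node
           [6, 3, 1, 2],   # stage 1
           [5, 5, 2, 1]]   # stage 2
links = [1, 1, 3]          # delay of the 3 links between the 4 nodes
print(map_pipeline(compute, links))   # -> (7, [1, 2, 2])
```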
NASA Astrophysics Data System (ADS)
Molina-Perez, Edmundo
It is widely recognized that international environmental technological change is key to reducing the rapidly rising greenhouse gas emissions of emerging nations. In 2010, the United Nations Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) agreed to the creation of the Green Climate Fund (GCF). This new multilateral organization has been created with the collective contributions of COP members, and has been tasked with directing over USD 100 billion per year towards investments that can enhance the development and diffusion of clean energy technologies in both advanced and emerging nations (Helm and Pichler, 2015). The landmark agreement reached at COP 21 has reaffirmed the key role that the GCF plays in enabling climate mitigation, as it is now necessary to align large-scale climate financing efforts with the long-term goals agreed at Paris 2015. This study argues that because of the incomplete understanding of the mechanics of international technological change, the multiplicity of policy options and, ultimately, the presence of deep uncertainty about climate and technological change, climate financing institutions such as the GCF require new analytical methods for designing long-term robust investment plans. Motivated by these challenges, this dissertation shows that the application of new analytical methods, such as Robust Decision Making (RDM) and Exploratory Modeling (Lempert, Popper and Bankes, 2003), to the study of international technological change and climate policy provides useful insights that can be used for designing a robust architecture of international technological cooperation for climate change mitigation. For this study I developed an exploratory dynamic integrated assessment model (EDIAM) which is used as the scenario generator in a large computational experiment. The scope of the experimental design considers a broad set of climate and technological scenarios. These scenarios combine five sources of uncertainty: climate change, elasticity of substitution between renewable and fossil energy, and three different sources of technological uncertainty (i.e. R&D returns, innovation propensity and technological transferability). The performance of eight different GCF and non-GCF based policy regimes is evaluated in light of various end-of-century climate policy targets. Then I combine traditional scenario discovery data mining methods (Bryant and Lempert, 2010) with high-dimensional stacking methods (Suzuki, Stem and Manzocchi, 2015; Taylor et al., 2006; LeBlanc, Ward and Wittels, 1990) to quantitatively characterize the conditions under which it is possible to stabilize greenhouse gas emissions and keep temperature rise below 2°C before the end of the century. Finally, I describe a method by which it is possible to combine the results of scenario discovery with high-dimensional stacking to construct a dynamic architecture of low-cost technological cooperation. This dynamic architecture consists of adaptive pathways (Kwakkel, Haasnoot and Walker, 2014; Haasnoot et al., 2013) which begin with carbon taxation across both regions as a critical near-term action. Then, in subsequent phases, different forms of cooperation are triggered depending on the unfolding climate and technological conditions. I show that there is no single policy regime that dominates over the entire uncertainty space.
Instead, I find that it is possible to combine these different architectures into a dynamic framework for technological cooperation across regions that can be adapted to unfolding climate and technological conditions, which can lead to a greater rate of success and lower costs in meeting the end-of-century climate change objectives agreed at the 2015 Paris Conference of the Parties. Keywords: international technological change, emerging nations, climate change, technological uncertainties, Green Climate Fund.
Re-Form: FPGA-Powered True Codesign Flow for High-Performance Computing In The Post-Moore Era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappello, Franck; Yoshii, Kazutomo; Finkel, Hal
Multicore scaling will end soon because of practical power limits, and dark silicon is becoming an even bigger issue than the end of Moore’s law. In the post-Moore era, the energy efficiency of computing will be a major concern, and FPGAs could be key to maximizing it. In this paper we address severe challenges in the adoption of FPGAs in HPC and describe “Re-form,” an FPGA-powered codesign flow.
Aeronautics research and technology program and specific objectives
NASA Technical Reports Server (NTRS)
1981-01-01
Aeronautics research and technology program objectives in fluid and thermal physics, materials and structures, controls and guidance, human factors, multidisciplinary activities, computer science and applications, propulsion, rotorcraft, high speed aircraft, subsonic aircraft, and rotorcraft and high speed aircraft systems technology are addressed.
D'Souza, Mark; Sulakhe, Dinanath; Wang, Sheng; Xie, Bing; Hashemifar, Somaye; Taylor, Andrew; Dubchak, Inna; Conrad Gilliam, T; Maltsev, Natalia
2017-01-01
Recent technological advances in genomics allow the production of biological data at unprecedented tera- and petabyte scales. Efficient mining of these vast and complex datasets for the needs of biomedical research critically depends on a seamless integration of the clinical, genomic, and experimental information with prior knowledge about genotype-phenotype relationships. Such experimental data accumulated in publicly available databases should be accessible to a variety of algorithms and analytical pipelines that drive computational analysis and data mining. We present an integrated computational platform Lynx (Sulakhe et al., Nucleic Acids Res 44:D882-D887, 2016) (http://lynx.cri.uchicago.edu), a web-based database and knowledge extraction engine. It provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization. It gives public access to the Lynx integrated knowledge base (LynxKB) and its analytical tools via user-friendly web services and interfaces. The Lynx service-oriented architecture supports annotation and analysis of high-throughput experimental data. Lynx tools assist the user in extracting meaningful knowledge from LynxKB and experimental data, and in the generation of weighted hypotheses regarding the genes and molecular mechanisms contributing to human phenotypes or conditions of interest. The goal of this integrated platform is to support the end-to-end analytical needs of various translational projects.
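As a hedged illustration of the kind of enrichment analysis such a platform offers, the sketch below runs a hypergeometric over-representation test; the background size, pathway size and overlap counts are made up, and this is not Lynx's actual API.

```python
# Minimal gene-set over-representation test (hypergeometric); all counts are invented.
from scipy.stats import hypergeom

background_size = 20000   # genes in the annotated background (assumed)
pathway_genes   = 150     # genes annotated to a hypothetical pathway
query_genes     = 300     # genes in the user's input list
overlap         = 12      # query genes that fall in the pathway

# P(X >= overlap) when drawing query_genes at random from the background.
p_value = hypergeom.sf(overlap - 1, background_size, pathway_genes, query_genes)
print(f"enrichment p-value: {p_value:.3g}")
```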
A rhythm-based authentication scheme for smart media devices.
Lee, Jae Dong; Jeong, Young-Sik; Park, Jong Hyuk
2014-01-01
In recent years, ubiquitous computing has rapidly emerged in our lives, and extensive studies have been conducted in a variety of areas related to smart devices, such as tablets, smartphones, smart TVs, smart refrigerators, and smart media devices, as a means of realizing ubiquitous computing. In particular, smartphones have evolved significantly from traditional feature phones. Increasingly high-end smartphone models that can perform a range of functions are now available. Smart devices have become widely popular since they provide high efficiency and great convenience for not only private daily activities but also business endeavors. Rapid advancements have been achieved in smart device technologies to improve end users' convenience. Consequently, many people increasingly rely on smart devices to store their valuable and important data. With this increasing dependence, an important aspect that must be addressed is security. Leaking of private information or sensitive business data due to loss or theft of smart devices could result in exorbitant damage. To mitigate these security threats, basic embedded locking features are provided in smart devices. However, these locking features are vulnerable. In this paper, an original security-locking scheme using a rhythm-based locking system (RLS) is proposed to overcome the existing security problems of smart devices. RLS is a user-authenticated system that addresses vulnerability issues in the existing locking features and provides secure confidentiality in addition to convenience.
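A toy sketch of rhythm-based matching is given below to make the idea concrete; it simply compares inter-tap intervals against an enrolled template within a fixed tolerance and is not the RLS scheme described in the paper.

```python
# Toy rhythm matcher: a tap pattern authenticates if its inter-tap intervals are
# close enough to the enrolled template (all timings and the tolerance are assumed).
def intervals(timestamps_ms):
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def rhythm_matches(enrolled_taps, attempt_taps, tolerance_ms=120):
    t1, t2 = intervals(enrolled_taps), intervals(attempt_taps)
    if len(t1) != len(t2):
        return False
    return all(abs(a - b) <= tolerance_ms for a, b in zip(t1, t2))

enrolled = [0, 250, 500, 1000, 1150]     # tap times recorded at enrolment (ms)
attempt  = [0, 260, 530, 1010, 1180]     # unlock attempt
print(rhythm_matches(enrolled, attempt))  # True: same rhythm within tolerance
```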
Optical Computers and Space Technology
NASA Technical Reports Server (NTRS)
Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela
1995-01-01
The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates for overcoming the extreme limitations that conventional electronic logic circuits impose on the growth in speed and complexity of today's computations. The new optical technology has increased the demand for high-quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high-quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties, which are quite useful for optical computer technology.
An integrated system for land resources supervision based on the IoT and cloud computing
NASA Astrophysics Data System (ADS)
Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie
2017-01-01
Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.
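To illustrate the MapReduce-style parallel tallying mentioned above (purely as a sketch, not the system's code), the example below counts hypothetical breach reports per county with a map step over data chunks and a reduce step that merges the partial counts.

```python
# Toy map/reduce over hypothetical land-resource breach reports (illustrative only).
from collections import Counter
from functools import reduce
from multiprocessing import Pool

reports = [
    {"county": "A", "source": "remote sensing"},
    {"county": "B", "source": "web sentiment"},
    {"county": "A", "source": "field check"},
]

def mapper(chunk):
    # Map: count reports per county within one chunk.
    return Counter(r["county"] for r in chunk)

def reducer(c1, c2):
    # Reduce: merge partial counts.
    return c1 + c2

if __name__ == "__main__":
    chunks = [reports[i::2] for i in range(2)]   # naive partitioning into 2 chunks
    with Pool(2) as pool:
        partials = pool.map(mapper, chunks)
    totals = reduce(reducer, partials, Counter())
    print(dict(totals))
```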
NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report
NASA Technical Reports Server (NTRS)
Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ
2013-01-01
The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities.
Framework for a clinical information system.
Van De Velde, R; Lansiers, R; Antonissen, G
2002-01-01
The design and implementation of a Clinical Information System architecture is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology are used, with centralised and departmental clinical information systems serving as the back-end store for all clinical data. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and on reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.
Individualized radiotherapy by combining high-end irradiation and magnetic resonance imaging.
Combs, Stephanie E; Nüsslin, Fridtjof; Wilkens, Jan J
2016-04-01
Image-guided radiotherapy (IGRT) has been integrated into daily clinical routine and can today be considered the standard, especially for high-dose radiotherapy. Currently, imaging is based on MV- or kV-CT, which has clear limitations, especially in soft-tissue contrast. Thus, the combination of magnetic resonance (MR) imaging and high-end radiotherapy opens a new horizon. The intricate technical properties of MR imagers pose a challenge when they are combined with radiation delivery technology. Several solutions that are almost ready for routine clinical application have been developed. The clinical questions include dose-escalation strategies, monitoring of changes during treatment, and imaging without additional radiation exposure during treatment.
Ring, Haim; Keren, Ofer; Zwecker, Manuel; Dynia, Aida
2007-10-01
With the development of computer technology and the high-tech electronics industry over the past 30 years, the technological age is flourishing. New technologies are continually being introduced, and questions regarding the economic viability of these technologies need to be addressed. The aim of this study was to identify the medical technologies currently in use in different rehabilitation medicine settings in Israel. The TECHNO-R 2005 survey was conducted in two phases. Beginning in 2004, the first survey used a questionnaire with open questions relating to the different technologies in clinical use, including questions on their purpose, who operates the device (technician, physiotherapist, occupational therapist, physician, etc.), and a description of the treated patients. This questionnaire was sent to 31 rehabilitation medicine facilities in Israel. Due to difficulties in comprehension of the term "technology," a second revised standardized questionnaire with closed-ended questions specifying diverse technologies was introduced in 2005. The responder had to mark, from a list of 15 different medical technologies, which were in use in his or her facility, as well as their purpose, who operates the device, and a description of the treated patients. Transcutaneous electrical nerve stimulation, the TILT bed, continuous passive movement, and therapeutic ultrasound were the most widely used technologies in rehabilitation medicine facilities. Monitoring of the sitting position in the wheelchair, at the bottom of the list, was found to be the least used technology (with 15.4% occurrence). Most of the technologies are used primarily for treatment purposes and to a lesser degree for diagnosis and research. Our study poses a fundamental semantic and conceptual question regarding what kind of technologies are or should be part of the standard equipment of any accredited rehabilitation medicine facility for assessment, treatment and/or research. For this purpose, additional data are needed.
NASA Astrophysics Data System (ADS)
Valle, Fabio
The paper analyzes satellite broadband systems for consumers from the perspective of technological innovation. The suggested interpretation relies upon concepts such as technological paradigm, technological trajectory and salient points. Satellite technology for broadband is a complex system in which each component (i.e. the satellite, the end-user equipment, the on-ground systems and related infrastructure) develops at a different speed. Innovation in this industry has recently concentrated on the satellite spacecraft, which seemed to be the component with the highest perceived opportunity for improvement. The industry has recently designed satellite systems with a continuous increase in available capacity, suggesting that there is a technological trajectory in this area, similar to Moore’s law in the computer industry. The implications for industry players, Ka-band systems, and the growth of future applications are also examined.
Digital multicolor printing: state of the art and future challenges
NASA Astrophysics Data System (ADS)
Kipphan, Helmut
1995-04-01
During the last 5 years, digital techniques have become extremely important in the graphic arts industry. All sections in the production flow for producing multicolor printed products - prepress, printing and postpress - are influenced by digitalization, in both an evolutionary and a revolutionary way. New equipment and network techniques bring all the sections closer together. The focus is on high-quality multicolor printing together with high productivity. Conventional offset printing technology is compared with the leading nonimpact printing technologies, and computer to press is contrasted with computer to print techniques. The newest available digital multicolor presses are described - the direct imaging offset printing press from HEIDELBERG with its new laser imaging technique, as well as the INDIGO and XEIKON presses based on electrophotography. From technical specifications, economic calculations and print quality, it is shown that each technique has its own market segments. An outlook is given for future computer to press techniques and the potential of nonimpact printing technologies for advanced high-speed multicolor computer to print equipment. Synergy effects between NIP technologies and conventional printing technologies, in both directions, make innovative new products possible, for example hybrid printing systems. It is also shown that NIP technologies offer potential for improving print quality, based on special screening algorithms and a higher number of grey levels per pixel. As an intermediate step in the digitalization of the production flow, and also as an economical solution, computer to plate equipment is described. When printed products are produced in a fully digital way, digital color proofing as well as color management systems are needed. The newest high-tech equipment using NIP technologies for producing proofs is explained. All in all, it is shown that the state of the art in digital multicolor printing has reached a very high level in technology, productivity and quality, but that there is still room for improvements and innovations. Manufacturers of equipment and producers of printed products can take part in a successful evolution; changes, chances and challenges must be recognized and considered for future-oriented activities and investments.
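One classic screening technique of the kind alluded to above is ordered dithering; the toy sketch below applies a 4x4 Bayer threshold matrix to a grey ramp to produce a binary dot pattern. It illustrates the general principle only, not any press manufacturer's screening algorithm.

```python
# Ordered dithering with a 4x4 Bayer matrix: grey levels -> binary halftone pattern.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def screen(grey_image):
    """grey_image: 2D list of values in 0..255; returns a 0/1 halftone pattern."""
    out = []
    for y, row in enumerate(grey_image):
        out.append([])
        for x, value in enumerate(row):
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16 * 255
            out[-1].append(1 if value > threshold else 0)
    return out

ramp = [[x * 16 for x in range(16)] for _ in range(4)]   # horizontal grey ramp
for line in screen(ramp):
    print("".join("#" if v else "." for v in line))
```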
Finn, Jerry; Atkinson, Teresa
2009-11-01
The Technology Safety Project of the Washington State Coalition Against Domestic Violence was designed to increase awareness and knowledge of technology safety issues for domestic violence victims, survivors, and advocacy staff. The project used a "train-the-trainer" model and provided computer and Internet resources to domestic violence service providers to (a) increase safe computer and Internet access for domestic violence survivors in Washington, (b) reduce the risk posed by abusers by educating survivors about technology safety and privacy, and (c) increase the ability of survivors to help themselves and their children through information technology. Evaluation of the project suggests that the program is needed, useful, and effective. Consumer satisfaction was high, and there was perceived improvement in computer confidence and knowledge of computer safety. Areas for future program development and further research are discussed.
de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D
2004-03-01
Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
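A minimal illustration of describing measurement and reconstruction settings in XML and reading them back is sketched below; it uses Python's standard ElementTree in place of the Java/Sun XML packages mentioned in the abstract, and the element and attribute names are invented for the example.

```python
# Toy example: an XML description of a (hypothetical) sparse MRI measurement,
# parsed back into values a reconstruction service could consume.
import xml.etree.ElementTree as ET

xml_doc = """
<mri_measurement dimensions="2">
  <axis name="kx" samples="128" sparse="false"/>
  <axis name="ky" samples="64"  sparse="true"/>
  <reconstruction method="bayesian" iterations="200"/>
</mri_measurement>
"""

root = ET.fromstring(xml_doc)
for axis in root.findall("axis"):
    print(axis.get("name"), axis.get("samples"), "sparse:", axis.get("sparse"))
print("reconstruction:", root.find("reconstruction").get("method"))
```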
Integrated instrumentation & computation environment for GRACE
NASA Astrophysics Data System (ADS)
Dhekne, P. S.
2002-03-01
The project GRACE (Gamma Ray Astrophysics with Coordinated Experiments) aims at setting up a state-of-the-art Gamma Ray Observatory at Mt. Abu, Rajasthan for undertaking comprehensive scientific exploration over a wide spectral window (10's keV - 100's TeV) from a single location through 4 coordinated experiments. The cumulative data collection rate of all the telescopes is expected to be about 1 GB/hr, necessitating innovations in the data management environment. The real-time data acquisition and control as well as the off-line data processing, analysis and visualization environments of these systems are based on the use of cutting-edge and affordable technologies in the fields of computers, communications and the Internet. We propose to provide a single, unified environment by seamless integration of instrumentation and computation, taking advantage of the recent advancements in Web-based technologies. This new environment will allow researchers better access to facilities, improve resource utilization and enhance collaborations by providing identical environments for online as well as offline usage of this facility from any location. We present here a proposed implementation strategy for a platform-independent web-based system that supplements automated functions with video-guided interactive and collaborative remote viewing, remote control through a virtual instrumentation console, remote acquisition of telescope data, data analysis, data visualization and an active imaging system. This end-to-end web-based solution will enhance collaboration among researchers at the national and international level for undertaking scientific studies using the telescope systems of the GRACE project.
ERIC Educational Resources Information Center
Gibbs, Shirley; Steel, Gary; Kuiper, Alison
2011-01-01
The use of computers has become part of everyday life. The high prevalence of computer use appears to lead employers to assume that university graduates will have the good computing skills necessary in many graduate level jobs. This study investigates how well the expectations of employers match the perceptions of near-graduate students about the…
Robust telerobotics - an integrated system for waste handling, characterization and sorting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couture, S.A.; Hurd, R.L.; Wilhelmsen, K.C.
The Mixed Waste Management Facility (MWMF) at the Lawrence Livermore National Laboratory was designed to serve as a national testbed to demonstrate integrated technologies for the treatment of low-level organic mixed waste at a pilot-plant scale. Pilot-scale demonstration serves to bridge the gap between mature, bench-scale proven technologies and full-scale treatment facilities by providing the infrastructure needed to evaluate technologies in an integrated, front-end to back-end facility. Consistent with the intent to focus on technologies that are ready for pilot scale deployment, the front-end handling and feed preparation of incoming waste material has been designed to demonstrate the application of emerging robotic and remotely operated handling systems. The selection of telerobotics for remote handling in MWMF was made based on a number of factors - personnel protection, waste generation, maturity, cost, flexibility and extendibility. Telerobotics, or shared control of a manipulator by an operator and a computer, provides the flexibility needed to vary the amount of automation or operator intervention according to task complexity. As part of the telerobotics design effort, the technical risk of deploying the technology was reduced through focused developments and demonstrations. The work involved integrating key tools (1) to make a robust telerobotic system that operates at speeds and reliability levels acceptable to waste handling operators and (2) to demonstrate an efficient operator interface that minimizes the amount of special training and skills needed by the operator. This paper describes the design and operation of the prototype telerobotic waste handling and sorting system that was developed for MWMF.
Benchmarking high performance computing architectures with CMS’ skeleton framework
Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.
2017-11-23
Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures, machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.
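As a rough, hedged analogue of such a skeleton benchmark (not the CMS suite itself), one can time how long a thread pool takes to schedule and complete many tiny tasks at different worker counts; the task body and counts below are arbitrary.

```python
# Micro-benchmark sketch: per-task scheduling cost of a thread pool at various widths.
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    return x * x   # stands in for one small emulated framework module

def benchmark(workers, n_tasks=100_000):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(tiny_task, range(n_tasks), chunksize=1000))
    return time.perf_counter() - start

for w in (1, 2, 4, 8):
    print(f"{w} workers: {benchmark(w):.3f} s")
```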
Aspects of Teamwork Observed in a Technological Task in Junior High Schools.
ERIC Educational Resources Information Center
Barak, Moshe; Maymon, Tsipora
1998-01-01
Teams of ninth-graders (n=172) in Israel designed and constructed models of hot-air balloons with tissue paper. The short, open-ended technological task promoted teamwork and high motivation. Most teams functioned without a leader. Teachers were challenged by the need to transfer autonomy and responsibility to students. (SK)
ERIC Educational Resources Information Center
Smith, Mark S.
2013-01-01
The purpose of this study was to determine the impact of interactive whiteboard technology on ninth grade English End of Course scores in two high schools in the Upstate of South Carolina in the school year 2011-2012. This study also sought to determine what impact interactive whiteboard technology had on the factors of gender, socio-economic…
Analysis of fractionation in corn-to-ethanol plants
NASA Astrophysics Data System (ADS)
Nelson, Camille
As the dry grind ethanol industry has grown, the research and technology surrounding ethanol production and co-product value have increased, including the use of back-end oil extraction and front-end fractionation. Front-end fractionation is the pre-fermentation separation of the corn kernel into three fractions: endosperm, bran, and germ. The endosperm fraction enters the existing ethanol plant, and a high-protein DDGS product remains after fermentation. High-value oil is extracted from the germ fraction, leaving corn germ meal and bran as co-products of the other two streams. These three co-products have a very different composition than traditional corn DDGS. Installing this technology allows ethanol plants to tap into more diverse markets and ultimately could increase profitability. An ethanol plant model was developed to evaluate both back-end oil extraction and front-end fractionation technology and to predict the change in co-products based on the technology installed. The model runs in Microsoft Excel and requires inputs of whole corn composition (proximate analysis), amino acid content, and weight to predict co-product quantity and quality. User inputs include saccharification and fermentation efficiencies, plant capacity, and plant process specifications, including front-end fractionation and back-end oil extraction, if applicable. This model provides plants a way to assess and monitor variability in co-product composition due to the variation in whole corn composition. Additionally, the co-products predicted in this model are entered into the US Pork Center of Excellence National Swine Nutrition Guide feed formulation software. This allows the plant user and animal nutritionists to evaluate the value of the new co-products in existing animal diets.
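A simplified mass-balance sketch in the spirit of the described spreadsheet model is shown below; all splits, yields and compositions are illustrative assumptions rather than values from the model.

```python
# Toy mass balance for a fractionating dry-grind plant (all numbers assumed).
corn_mass_kg = 1000.0
composition = {"starch": 0.62, "protein": 0.09, "oil": 0.04, "fiber": 0.11, "other": 0.14}

# Hypothetical front-end fractionation split of the kernel mass.
fraction_split = {"endosperm": 0.80, "germ": 0.12, "bran": 0.08}

# Assume most starch reports to the endosperm stream; unfermented solids become DDGS.
starch_in_endosperm = corn_mass_kg * composition["starch"] * 0.95
ethanol_kg = starch_in_endosperm * 0.51 * 0.90          # stoichiometric yield x efficiency
endosperm_mass = corn_mass_kg * fraction_split["endosperm"]
high_protein_ddgs = endosperm_mass - starch_in_endosperm * 0.90

germ_mass = corn_mass_kg * fraction_split["germ"]
oil_kg = corn_mass_kg * composition["oil"] * 0.75       # assumed oil recovery from germ
germ_meal = germ_mass - oil_kg
bran_kg = corn_mass_kg * fraction_split["bran"]

print(f"ethanol {ethanol_kg:.0f} kg, HP-DDGS {high_protein_ddgs:.0f} kg, "
      f"corn oil {oil_kg:.0f} kg, germ meal {germ_meal:.0f} kg, bran {bran_kg:.0f} kg")
```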
Simulation training tools for nonlethal weapons using gaming environments
NASA Astrophysics Data System (ADS)
Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry
2006-05-01
Modern simulation techniques have a growing role in evaluating new technologies and in developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstrating concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real time. Computer games now allow multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherently intensive calculations required for complex simulation scenarios. The main components of the leading game engines have been released for user modification, enabling game enthusiasts and amateur programmers to advance the state of the art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs as end users learn what a device does and doesn't do prior to use, understand responses to the device prior to deployment, determine if the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life game engine, for the purpose of evaluating the latest non-lethal weapon devices and developing training tools for such devices.
Computer Programmed Milling Machine Operations. High-Technology Training Module.
ERIC Educational Resources Information Center
Leonard, Dennis
This learning module for a high school metals and manufacturing course is designed to introduce the concept of computer-assisted machining (CAM). Through it, students learn how to set up and put data into the controller to machine a part. They also become familiar with computer-aided manufacturing and learn the advantages of computer numerical…
Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra
2011-01-01
Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. In the last three decades, in fact, ICT has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an expansion of service and professional robotics as well as other technologies, and surgical navigation has been introduced especially in minimally invasive surgery. Localization systems also provide high-precision treatments in radiotherapy and radiosurgery. Virtual or augmented reality plays a role both in surgical training and planning and in safe rehabilitation during the first stage of recovery from neurological diseases. Also, in the chronic phase of motor diseases, robotics helps with special assistive devices and prostheses. Although, in the past, the actual need for and advantage of navigation, localization, and robotics in surgery and therapy have been in doubt, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has largely expanded the field of applications of these technologies, making it more likely that, in the near future, their presence will increase dramatically, taking advantage of the generational change of end users and the increasing demand for quality in health-care delivery and management.
From 'automation' to 'autonomy': the importance of trust repair in human-machine interaction.
de Visser, Ewart J; Pak, Richard; Shaw, Tyler H
2018-04-09
Modern interactions with technology are increasingly moving away from simple human use of computers as tools to the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centred approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user's needs and preferences. By adapting literature from industrial psychology, we propose a framework to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems. Practitioner Summary: This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust repair abilities will ensure future technology maintains and repairs relationships with their human partners.
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in SGI Altix systems and clusters respectively; 4. The IBM System-on-a-Chip used in IBM BlueGene/L; 5. The HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. The SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in the NEC SX-6/7; 8. The Power 4+ processor, which is used in the Hitachi SR11000; 9. An NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
NASA Astrophysics Data System (ADS)
Nikzad, Shouleh; Jewell, April D.; Hoenk, Michael E.; Jones, Todd J.; Hennessy, John; Goodsall, Tim; Carver, Alexander G.; Shapiro, Charles; Cheng, Samuel R.; Hamden, Erika T.; Kyne, Gillian; Martin, D. Christopher; Schiminovich, David; Scowen, Paul; France, Kevin; McCandliss, Stephan; Lupu, Roxana E.
2017-07-01
Exciting concepts are under development for flagship, probe class, explorer class, and suborbital class NASA missions in the ultraviolet/optical spectral range. These missions will depend on high-performance silicon detector arrays being delivered affordably and in high numbers. To that end, we have advanced delta-doping technology to high-throughput and high-yield wafer-scale processing, encompassing a multitude of state-of-the-art silicon-based detector formats and designs. We have embarked on a number of field observations, instrument integrations, and independent evaluations of delta-doped arrays. We present recent data and innovations from JPL's Advanced Detectors and Systems Program, including two-dimensional doping technology, JPL's end-to-end postfabrication processing of high-performance UV/optical/NIR arrays and advanced coatings for detectors. While this paper is primarily intended to provide an overview of past work, developments are identified and discussed throughout. Additionally, we present examples of past, in-progress, and planned observations and deployments of delta-doped arrays.
ERIC Educational Resources Information Center
Draper, Thomas W.; And Others
This paper introduces and develops the premise that technology should be used as a tool to be adapted to early childhood education rather than adapting the preschool curriculum to computers. Although recent evidence suggests a national interest in having high technology play a role in the teaching of young children, particularly in reading,…