A Research Program in Computer Technology. 1982 Annual Technical Report
1983-03-01
for the Defense Advanced Research Projects Agency. The research applies computer science and technology to areas of high DoD/military impact. The ISI...implement the plan; New Computing Environment - investigation and adaptation of developing computer technologies to serve the research and military user communities; and Computer...
Computer Access. Tech Use Guide: Using Computer Technology.
ERIC Educational Resources Information Center
Council for Exceptional Children, Reston, VA. Center for Special Education Technology.
One of nine brief guides for special educators on using computer technology, this guide focuses on access including adaptations in input devices, output devices, and computer interfaces. Low technology devices include "no-technology" devices (usually modifications to existing devices), simple switches, and multiple switches. High technology input…
Information Technology and Literacy Assessment.
ERIC Educational Resources Information Center
Balajthy, Ernest
2002-01-01
Compares technology predictions from around 1989 with the technology of 2002. Discusses the place of computer-based assessment today, computer-scored testing, computer-administered formal assessment, Internet-based formal assessment, computerized adaptive tests, placement tests, informal assessment, electronic portfolios, information management,…
ERIC Educational Resources Information Center
Draper, Thomas W.; And Others
This paper introduces and develops the premise that technology should be used as a tool to be adapted to early childhood education rather than adapting the preschool curriculum to computers. Although recent evidence suggests a national interest in having high technology play a role in the teaching of young children, particularly in reading,…
Adaptive Technologies for Accommodating Persons with Disabilities.
ERIC Educational Resources Information Center
Berliss, Jane; And Others
1993-01-01
Eight articles review the progress achieved in making library computing technologies and library services accessible to people with disabilities. Adaptive technologies, automated conversion into Braille, and successful programs that demonstrate compliance with the Americans with Disabilities Act are described. A resource list is included. (EA)
Adaptive Technologies for Training and Education
ERIC Educational Resources Information Center
Durlach, Paula J., Ed.; Lesgold, Alan M., Ed.
2012-01-01
This edited volume provides an overview of the latest advancements in adaptive training technology. Intelligent tutoring has been deployed for well-defined and relatively static educational domains such as algebra and geometry. However, this adaptive approach to computer-based training has yet to come into wider usage for domains that are less…
ERIC Educational Resources Information Center
Lazzaro, Joseph J.
1993-01-01
Describes adaptive technology for personal computers that accommodate disabled users and may require special equipment including hardware, memory, expansion slots, and ports. Highlights include vision aids, including speech synthesizers, magnification, braille, and optical character recognition (OCR); hearing adaptations; motor-impaired…
Medical education and information and communication technology.
Houshyari, Asefeh Badiey; Bahadorani, Mahnaz; Tootoonchi, Mina; Gardiner, John Jacob Zucker; Peña, Roberto A; Adibi, Peyman
2012-01-01
Information and communication technology (ICT) has brought many changes to medical education and practice in the last couple of decades. Teaching and learning medicine in particular have undergone profound changes due to computer technologies, and medical schools around the world have invested heavily either in new computer technologies or in the process of adapting to this technological revolution. In order to catch up with the rest of the world, developing countries need to research their options in adapting to new computer technologies. This descriptive survey study was designed to assess medical students' computer and Internet skills and their attitudes toward ICT. Research findings showed that the mean self-perceived computer knowledge score for male students in general was greater than for female students. Also, students who had participated in various prior computer workshops, had access to a computer, the Internet, and e-mail, and frequently checked their e-mail had higher mean self-perceived knowledge and skill scores. Finally, students with a positive attitude toward ICT rated their computer knowledge higher than those who had no opinion. The results confirm that medical schools, particularly in developing countries, need to make fundamental changes: modifying curricula to integrate ICT into medical education, creating the essential infrastructure for ICT use in medical education and practice, and providing structured computer training for faculty and students.
An Adaptive Testing System for Supporting Versatile Educational Assessment
ERIC Educational Resources Information Center
Huang, Yueh-Min; Lin, Yen-Ting; Cheng, Shu-Chen
2009-01-01
With the rapid growth of computer and mobile technology, it is a challenge to integrate computer-based testing (CBT) with mobile learning (m-learning), especially for formative assessment and self-assessment. For self-assessment, the computerized adaptive test (CAT) is an appropriate way to enable students to evaluate themselves. In CAT, students are…
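The core selection loop of a CAT like the one this abstract describes can be sketched in a few lines. The sketch below is a generic maximum-information rule under a Rasch (1PL) model, not the specific system in the abstract; the item bank, the crude ability-update step, and all function names are illustrative assumptions.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that a student of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta;
    maximal when item difficulty matches the ability estimate."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta, item_bank, administered):
    """Pick the unadministered item that is most informative at the
    current ability estimate -- the core CAT selection rule."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, item_bank[i]))

def update_theta(theta, b, correct, step=0.5):
    """Crude gradient step on the log-likelihood after one response
    (real CAT systems use maximum-likelihood or Bayesian updates)."""
    return theta + step * ((1.0 if correct else 0.0) - p_correct(theta, b))
```

With a difficulty bank of [-2, -1, 0, 1, 2] and a current estimate of 0, the rule selects the middle item first, then moves toward harder or easier items as responses come in.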
Ubiquitous Computing Technologies in Education
ERIC Educational Resources Information Center
Hwang, Gwo-Jen; Wu, Ting-Ting; Chen, Yen-Jung
2007-01-01
The prosperous development of wireless communication and sensor technologies has attracted the attention of researchers from both computer and education fields. Various investigations have been made for applying the new technologies to education purposes, such that more active and adaptive learning activities can be conducted in the real world.…
The Technology Fix: The Promise and Reality of Computers in Our Schools
ERIC Educational Resources Information Center
Pflaum, William D.
2004-01-01
During the technology boom of the 1980s and 1990s, computers seemed set to revolutionize education. Do any of these promises sound familiar? (1) Technology would help all students learn better, thanks to multimedia programs capable of adapting to individual needs, learning styles, and skill levels; (2) Technology would transform the teacher's role…
Design of an LVDS to USB3.0 adapter and application
NASA Astrophysics Data System (ADS)
Qiu, Xiaohan; Wang, Yu; Zhao, Xin; Chang, Zhen; Zhang, Quan; Tian, Yuze; Zhang, Yunyi; Lin, Fang; Liu, Wenqing
2016-10-01
The USB 3.0 specification was published in 2008, and with the development of technology USB 3.0 is becoming popular. An LVDS (Low-Voltage Differential Signaling) to USB 3.0 adapter connects the communication port of a spectrometer device to the USB 3.0 port of a computer and converts the spectrometer's LVDS output data to USB. In order to keep pace with changing and developing technology, the LVDS to USB 3.0 adapter was designed and developed on the basis of an earlier LVDS to USB 2.0 adapter. The CYUSB3014, a new-generation USB bus interface chip produced by Cypress and conforming to the USB 3.0 communication protocol, uses GPIF-II (General Programmable Interface, second generation) to connect to the FPGA and raises the effective communication speed to 2 Gbps. The adapter is therefore able to connect more spectrometers to a single computer and provides a technical basis for the development of higher-speed industrial cameras. This article describes the design and development process of the LVDS to USB 3.0 adapter.
Adaptive Technology that Provides Access to Computers. DO-IT Program.
ERIC Educational Resources Information Center
Washington Univ., Seattle.
This brochure describes the different types of barriers individuals with mobility impairments, blindness, low vision, hearing impairments, and specific learning disabilities face in providing computer input, interpreting output, and reading documentation. The adaptive hardware and software that has been developed to provide functional alternatives…
ERIC Educational Resources Information Center
Lee, Cynthia; Yeung, Alexander Seeshing; Ip, Tiffany
2016-01-01
Computer technology provides spaces and locales for language learning. However, learning style preference and demographic variables may affect the effectiveness of technology use for a desired goal. Adapting Reid's pioneering Perceptual Learning Style Preference Questionnaire (PLSPQ), this study investigated the relations of university students'…
Assessing the Decision Process towards Bring Your Own Device
ERIC Educational Resources Information Center
Koester, Richard F.
2017-01-01
Information technology continues to evolve to the point where mobile technologies--such as smart phones, tablets, and ultra-mobile computers--have the embedded flexibility and power to be a ubiquitous platform fulfilling the user's entire computing needs. Mobile technology users view these platforms as adaptable enough to be the single solution for…
Advances in Adaptive Control Methods
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2009-01-01
This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.
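The notion of a time-delay stability margin mentioned above can be illustrated on the simplest possible case. The sketch below uses the classical scalar delayed-feedback loop x'(t) = -k·x(t - tau), whose margin tau* = pi/(2k) is a textbook result; it is not NASA's Optimal Control Modification analysis, and the simulation parameters are illustrative.

```python
import math

def delay_margin(k):
    """Time-delay stability margin of the scalar loop
    x'(t) = -k * x(t - tau): stable iff k * tau < pi / 2,
    so the margin is tau* = pi / (2 k).  A classical special
    case, used here only to illustrate the concept."""
    return math.pi / (2.0 * k)

def peak_response(k, tau, T=60.0, dt=0.001):
    """Forward-Euler simulation of x'(t) = -k * x(t - tau) from a
    unit constant initial history; returns the peak |x| over the
    final 8 seconds, which decays below the margin and grows above it."""
    steps, d = int(T / dt), int(tau / dt)
    x = [1.0] * (d + 1)                    # constant initial history
    for _ in range(steps):
        x.append(x[-1] + dt * (-k * x[-1 - d]))
    tail = x[-int(8.0 / dt):]
    return max(abs(v) for v in tail)
```

For k = 1 the margin is about 1.571 s: a delay of 1 s leaves the response decaying toward zero, while a delay of 2 s produces growing oscillations.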
First benchmark of the Unstructured Grid Adaptation Working Group
NASA Technical Reports Server (NTRS)
Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike
2017-01-01
Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. The difficulty of producing the highly anisotropic elements needed to satisfy a resolution request on complex curved geometries has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions, including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.
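The idea of adapting a mesh to conform to an analytic metric field reduces, in one dimension, to equidistribution: place vertices so that every element has unit length as measured in the metric. The sketch below shows that 1D special case with a made-up metric; the working group's benchmarks concern anisotropic simplicial meshes in 2D/3D, so this is only a conceptual illustration.

```python
import math

def metric(x):
    """Hypothetical analytic 1D metric requesting fine spacing near
    x = 0.5: desired local size h(x), metric M = 1 / h^2."""
    h = 0.05 + 0.5 * abs(x - 0.5)
    return 1.0 / h ** 2

def metric_length(a, b, n_quad=256):
    """Length of [a, b] measured in the metric: midpoint quadrature
    of the integral of sqrt(M)."""
    dx = (b - a) / n_quad
    return sum(math.sqrt(metric(a + (k + 0.5) * dx)) * dx for k in range(n_quad))

def adapt_mesh(a, b):
    """Place vertices so every element has metric length ~1 -- the
    'unit mesh' that metric-based adaptation targets."""
    total = metric_length(a, b)
    n = max(1, round(total))            # number of elements
    target = total / n                  # metric length per element
    verts, acc = [a], 0.0
    dx = (b - a) / 100000               # marching step
    x = a
    while x < b - 0.5 * dx:
        acc += math.sqrt(metric(x + 0.5 * dx)) * dx
        x += dx
        if acc >= target and len(verts) < n:
            verts.append(x)             # drop a vertex each metric unit
            acc = 0.0
    verts.append(b)
    return verts
```

Running `adapt_mesh(0.0, 1.0)` clusters vertices near x = 0.5, where the metric requests fine resolution, and spaces them widely near the endpoints.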
Access to the Arts through Assistive Technology.
ERIC Educational Resources Information Center
Frame, Charles
Personnel in the rehabilitation field have come to recognize the possibilities and implications of computers as assistive technology for disabled persons. This manual provides information on how to adapt the Unicorn Board, Touch Talker/Light Talker overlays, the Adaptive Firmware Card setup disk, and Trace-Transparent Access Module (T-TAM) to…
Adaptive Social Learning Based on Crowdsourcing
ERIC Educational Resources Information Center
Karataev, Evgeny; Zadorozhny, Vladimir
2017-01-01
Many techniques have been developed to enhance learning experience with computer technology. A particularly great influence of technology on learning came with the emergence of the web and adaptive educational hypermedia systems. While the web enables users to interact and collaborate with each other to create, organize, and share knowledge via…
PERSO: Towards an Adaptive e-Learning System
ERIC Educational Resources Information Center
Chorfi, Henda; Jemni, Mohamed
2004-01-01
In today's information technology society, members are increasingly required to be up to date on new technologies, particularly for computers, regardless of their background social situation. In this context, our aim is to design and develop an adaptive hypermedia e-learning system, called PERSO (PERSOnalizing e-learning system), where learners…
NASA Technical Reports Server (NTRS)
Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.
2016-01-01
Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of the High Performance Computing (HPC) environments anticipated in the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD workflow. These bottlenecks may persist because very little government investment has been targeted at these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, its current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD workflows.
Computer Technology for Industry
NASA Technical Reports Server (NTRS)
1979-01-01
In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effects by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC) (registered trademark), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.
Advances and trends in computational structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1986-01-01
Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.
Adaptive and perceptual learning technologies in medical education and training.
Kellman, Philip J
2013-10-01
Recent advances in the learning sciences offer remarkable potential to improve medical education and maximize the benefits of emerging medical technologies. This article describes 2 major innovation areas in the learning sciences that apply to simulation and other aspects of medical learning: Perceptual learning (PL) and adaptive learning technologies. PL technology offers, for the first time, systematic, computer-based methods for teaching pattern recognition, structural intuition, transfer, and fluency. Synergistic with PL are new adaptive learning technologies that optimize learning for each individual, embed objective assessment, and implement mastery criteria. The author describes the Adaptive Response-Time-based Sequencing (ARTS) system, which uses each learner's accuracy and speed in interactive learning to guide spacing, sequencing, and mastery. In recent efforts, these new technologies have been applied in medical learning contexts, including adaptive learning modules for initial medical diagnosis and perceptual/adaptive learning modules (PALMs) in dermatology, histology, and radiology. Results of all these efforts indicate the remarkable potential of perceptual and adaptive learning technologies, individually and in combination, to improve learning in a variety of medical domains. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
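The abstract describes the ARTS system as using each learner's accuracy and speed to guide spacing, sequencing, and mastery, but does not give its formulas. The sketch below is therefore a hypothetical scheduler in that spirit: the mastery rule, the priority weights, and every threshold are invented for illustration and are not the actual ARTS algorithm.

```python
def mastered(history, k=3, rt_limit=4.0):
    """An item counts as mastered after k consecutive fast, correct
    responses -- a joint accuracy-and-speed criterion.  history is a
    list of (correct, response_time_seconds) pairs."""
    recent = history[-k:]
    return len(recent) == k and all(c and rt <= rt_limit for c, rt in recent)

def priority(history, trials_since_seen):
    """Hypothetical priority score: errors and slow answers raise it,
    and it grows with time since last presentation (spacing)."""
    if not history:
        return 10.0                      # unseen items come first
    correct, rt = history[-1]
    score = (0.0 if correct else 5.0) + min(rt, 10.0)
    return score + 0.5 * trials_since_seen

def next_item(items, histories, last_seen, trial):
    """Choose the highest-priority unmastered item; mastered items
    are retired from the rotation."""
    live = [i for i in items if not mastered(histories[i])]
    if not live:
        return None                      # session complete
    return max(live, key=lambda i: priority(histories[i], trial - last_seen[i]))
```

Under this rule an item answered slowly and incorrectly jumps ahead of items the learner has not yet seen, while three fast correct responses in a row retire an item, so practice concentrates where accuracy or fluency is still weak.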
E-Assessment Adaptation at a Military Vocational College: Student Perceptions
ERIC Educational Resources Information Center
Cigdem, Harun; Oncu, Semiral
2015-01-01
This survey study examines an assessment methodology through e-quizzes administered at a military vocational college, and subsequent student perceptions, in the "Computer Networks" course in spring 2013. A total of 30 Computer Technologies and 261 Electronic and Communication Technologies students took three e-quizzes. Data were gathered…
CALL Essentials: Principles and Practice in CALL Classrooms
ERIC Educational Resources Information Center
Egbert, Joy
2005-01-01
Computers and the Internet offer innovative teachers exciting ways to enhance their pedagogy and capture their students' attention. These technologies have created a growing field of inquiry, computer-assisted language learning (CALL). As new technologies have emerged, teaching professionals have adapted them to support teachers and learners in…
A New Look at NASA: Strategic Research In Information Technology
NASA Technical Reports Server (NTRS)
Alfano, David; Tu, Eugene (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.
Reconfigurable environmentally adaptive computing
NASA Technical Reports Server (NTRS)
Coxe, Robin L. (Inventor); Galica, Gary E. (Inventor)
2008-01-01
Described are methods and apparatus, including computer program products, for reconfigurable environmentally adaptive computing technology. An environmental signal representative of an external environmental condition is received. A processing configuration is automatically selected, based on the environmental signal, from a plurality of processing configurations. A reconfigurable processing element is reconfigured to operate according to the selected processing configuration. In some examples, the environmental condition is detected and the environmental signal is generated based on the detected condition.
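The control flow in this abstract (receive an environmental signal, automatically select one of several processing configurations, reconfigure the element) can be sketched directly. The configuration table, thresholds, and field names below are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    clock_mhz: int
    redundancy: int        # e.g. modular-redundancy level

# Hypothetical configuration table, ordered from harshest environment
# to most benign: (minimum signal level, configuration).
CONFIGS = [
    (50.0, Config("radiation-hard", clock_mhz=50, redundancy=3)),
    (10.0, Config("balanced", clock_mhz=200, redundancy=2)),
    (0.0, Config("performance", clock_mhz=400, redundancy=1)),
]

def select_config(signal_level):
    """The 'automatic selection' step: pick the first configuration
    whose threshold the environmental signal meets."""
    for threshold, cfg in CONFIGS:
        if signal_level >= threshold:
            return cfg
    return CONFIGS[-1][1]

class ReconfigurableElement:
    """Stand-in for the reconfigurable processing element."""
    def __init__(self):
        self.active = select_config(0.0)

    def on_environment(self, signal_level):
        """Reconfigure to the configuration selected for the signal."""
        cfg = select_config(signal_level)
        if cfg is not self.active:
            self.active = cfg            # the reconfiguration step
        return self.active
```

A rising signal (here, an abstract radiation-like level) drops the clock and raises redundancy; a falling signal restores the performance configuration.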
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
Adapting to a Computer-Oriented Society: The Leadership Role of Business and Liberal Arts Faculties.
ERIC Educational Resources Information Center
O'Gorman, David E.
The need for higher education to take a proactive rather than a reactive stance in dealing with the impact of the computer is considered. The field of computerized video technology is briefly discussed. It is suggested that disparate groups such as the liberal arts and business faculties should cooperate to maximize the use of computer technology.…
Senior Adults and Computers in the 1990s.
ERIC Educational Resources Information Center
Lawhon, Tommie; And Others
1996-01-01
Older adults use computers for entertainment, education, and creative and business endeavors. Computer training helps them increase productivity, learn skills, and boost short-term memory. Electronic mail, online services, and the Internet encourage socialization. Adapted technology helps disabled and ill elders use computers. (SK)
ERIC Educational Resources Information Center
Severs, Mary K.
The Educational Center for Disabled Students at the University of Nebraska-Lincoln is designed to improve the academic performance and attitudes toward success of disabled students through computer technology and academic skills training. Adaptive equipment interventions take into account keyboard access and screen and voice output. Non-adaptive…
Interdisciplinary Facilities that Support Collaborative Teaching and Learning
ERIC Educational Resources Information Center
Asoodeh, Mike; Bonnette, Roy
2006-01-01
It has become widely accepted that the computer is an indispensable tool in the study of science and technology. Thus, in recent years curricular programs such as Industrial Technology and associated scientific disciplines have been adopting and adapting the computer as a tool in new and innovative ways to support teaching, learning, and research.…
Computer Software for Forestry Technology Curricula. Final Report.
ERIC Educational Resources Information Center
Watson, Roy C.; Scobie, Walter R.
Since microcomputers are being used more and more frequently in the forest products industry in the Pacific Northwest, Green River Community College conducted a project to search for BASIC language computer programs pertaining to forestry, and when possible, to adapt such software for use in teaching forestry technology. The search for applicable…
Using Neural Net Technology To Enhance the Efficiency of a Computer Adaptive Testing Application.
ERIC Educational Resources Information Center
Van Nelson, C.; Henriksen, Larry W.
The potential for computer adaptive testing (CAT) has been well documented. In order to improve the efficiency of this process, it may be possible to utilize a neural network, or more specifically, a back propagation neural network. The paper asserts that in order to accomplish this end, it must be shown that grouping examinees by ability as…
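The grouping step this abstract proposes (a back-propagation network that sorts examinees by ability) can be illustrated in its smallest form: a single logistic unit trained by gradient descent, which is the one-layer special case of back-propagation. The feature choice and all data below are hypothetical, not from the paper.

```python
import math

def train_classifier(examples, epochs=500, lr=0.5):
    """Logistic unit trained by stochastic gradient descent -- the
    single-layer special case of back-propagation.  Labels examinees
    as high (1) or low (0) ability from simple response features."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y                   # dLoss/dz for cross-entropy
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def classify(w, b, features):
    """Assign an examinee to the high- (1) or low- (0) ability group."""
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1 if z >= 0.0 else 0

# Hypothetical examinee features: (proportion correct, mean response time in s)
examinees = [([0.9, 1.2], 1), ([0.8, 1.5], 1), ([0.3, 4.0], 0), ([0.4, 3.5], 0)]
```

In a full multi-layer network the same error signal would be propagated back through hidden layers; a CAT front end could then use the predicted group to start examinees at an appropriate item difficulty.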
ERIC Educational Resources Information Center
Keller-Margulis, Milena; McQuillin, Samuel D.; Castañeda, Juan Javier; Ochs, Sarah; Jones, John H.
2018-01-01
Multitiered systems of support depend on screening technology to identify students at risk. The purpose of this study was to examine the use of a computer-adaptive test and latent class growth analysis (LCGA) to identify students at risk in reading with focus on the use of this methodology to characterize student performance in screening.…
12th Annual Science and Engineering Technology Conference/DoD TECH Exposition
2011-06-23
...compound when planning horizons grow: long design-test-build-field-adapt lead times exacerbate uncertain-futures problems and overload designs...ERS: Tools and Technologies to Facilitate Adaptability & Trustability...tying design, physical, and computational testing...science, engineering concepts, processes, and design tools to continuously coordinate design, testing, and production with warfighter review...
Adapting Teaching Strategies To Encompass New Technologies.
ERIC Educational Resources Information Center
Oravec, Jo Ann
2001-01-01
The explosion of special-purpose computing devices--Internet appliances, handheld computers, wireless Internet, networked household appliances--challenges business educators attempting to provide computer literacy education. At a minimum, they should address connectivity, expanded applications, and social and public policy implications of these…
Adaptation of XMM-Newton SAS to GRID and VO architectures via web
NASA Astrophysics Data System (ADS)
Ibarra, A.; de La Calle, I.; Gabriel, C.; Salgado, J.; Osuna, P.
2008-10-01
The XMM-Newton Scientific Analysis Software (SAS) is robust software that has allowed users to produce good scientific results since the beginning of the mission. This has been possible given the capability of SAS to evolve with the advent of new technologies and adapt to the needs of the scientific community. The prototype of the Remote Interface for Science Analysis (RISA) presented here is one such example: it provides remote analysis of XMM-Newton data with access to all the existing SAS functionality, while making use of GRID computing technology. This technology has recently emerged within the astrophysical community to tackle the long-standing problem of computing power for the reduction of large amounts of data.
Advanced processing for high-bandwidth sensor systems
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.
2000-11-01
Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.
Access: Exceptional Children and Technology.
ERIC Educational Resources Information Center
North Carolina State Dept. of Public Instruction, Raleigh. Div. for Exceptional Children.
The Exceptional Children and New Technology project sought to meet the instructional needs of physically handicapped, emotionally disturbed, learning disabled, and mentally handicapped children through the use of computer technology. The goals of the project were to test the instructional value of adaptive/assistive devices with exceptional…
Technology Needs for Teachers Web Development and Curriculum Adaptations
NASA Technical Reports Server (NTRS)
Carroll, Christy J.
1999-01-01
Computer-based mathematics and science curricula focusing on NASA inventions and technologies will enhance current teacher knowledge and skills. Materials and interactive software developed by educators will allow students to integrate their various courses, to work cooperatively, and to collaborate with both NASA scientists and students at other locations by using computer networks, email and the World Wide Web.
Patterns of computer usage among medical practitioners in rural and remote Queensland.
White, Col; Sheedy, Vicki; Lawrence, Nicola
2002-06-01
As part of a more detailed needs analysis, patterns of computer usage among medical practitioners in rural and remote Queensland were investigated. A questionnaire approach yielded a response rate of 23.82% (n = 131). Results suggest that medical practitioners in rural and remote Queensland are relatively sophisticated in their use of computer and information technologies and have embraced computerisation to a substantially higher extent than their urban counterparts and previously published estimates. Findings also indicate that a substantial number of rural and remote practitioners are utilising computer and information technologies for clinical purposes such as pathology, patient information sheets, prescribing, education, patient records and patient recalls. Despite barriers such as bandwidth limitations, cost and the sometimes unreliable quality of Internet service providers, a majority of rural and remote respondents rated an Internet site with continuing medical education information and services as important or very important. Suggestions that "rural doctors are slow to adapt to new technologies" are questioned, with findings indicating that rural and remote medical practitioners in Queensland have adapted to, and utilise, information technology to a far greater extent than has previously been documented.
The Construction of Knowledge through Social Interaction via Computer-Mediated Communication
ERIC Educational Resources Information Center
Saritas, Tuncay
2008-01-01
With the advance in information and communication technologies, computer-mediated communication--more specifically computer conferencing systems (CCS)--has captured the interest of educators as an ideal tool to create a learning environment featuring active, participative, and reflective learning. Educators are increasingly adapting the features…
Scolozzi, Paolo; Herzog, Georges
2017-07-01
We report the treatment of severe maxillary hypoplasia in two patients with unilateral cleft lip and palate using a specific approach that combines the Le Fort I distraction osteogenesis technique with computer-aided design/computer-aided manufacturing customized surgical guides and internal distractors based on virtual computational planning. This technology allows the virtual planned reconstruction to be transferred to the operating room by using custom patient-specific implants, surgical splints, surgical cutting guides, and surgical guides for plate or distractor adaptation.
Drajsajtl, Tomáš; Struk, Petr; Bednárová, Alice
2013-01-01
AsTeRICS - "The Assistive Technology Rapid Integration & Construction Set" - is a construction set for assistive technologies that can be adapted to the motor abilities of end-users. AsTeRICS allows access to different devices such as PCs, cell phones and smart home devices, all of them integrated in a platform adapted as much as possible to each user. People with motor disabilities in the upper limbs who have no cognitive impairment, no perceptual limitations (neither visual nor auditory) and basic skills in using technologies such as PCs, cell phones and electronic agendas thus have at their disposal a flexible and adaptable technology that enables them to access Human-Machine Interfaces (HMI) on the standard desktop and beyond. AsTeRICS provides graphical model design tools, a middleware, and hardware support for the creation of tailored AT solutions involving bioelectric signal acquisition, Brain/Neural Computer Interfaces, computer-vision techniques and standardized actuator and device controls, and allows several off-the-shelf AT devices to be combined in any desired combination. Novel, end-user-ready solutions can be created and adapted via a graphical editor without additional programming effort. The AsTeRICS open-source framework provides resources for utilization and extension of the system to developers and researchers. AsTeRICS was developed by the AsTeRICS project and was partially funded by the EC.
ERIC Educational Resources Information Center
Lamal, Pauline Dove
Art has always adapted technological advances to its own uses. In the last 15 years, art has turned to color photocopiers, computers, mimeograph machines, and thermofax copiers. With this in mind, Central Piedmont Community College began offering a course in 1982 called "Art and Technology" which focused on the application of office…
Pedagogical Approaches for Technology-Integrated Science Teaching
ERIC Educational Resources Information Center
Hennessy, Sara; Wishart, Jocelyn; Whitelock, Denise; Deaney, Rosemary; Brawn, Richard; la Velle, Linda; McFarlane, Angela; Ruthven, Kenneth; Winterbottom, Mark
2007-01-01
The two separate projects described have examined how teachers exploit computer-based technologies in supporting learning of science at secondary level. This paper examines how pedagogical approaches associated with these technological tools are adapted to both the cognitive and structuring resources available in the classroom setting. Four…
Use of technology for educating melanoma patients.
Marble, Nicole; Loescher, Lois J; Lim, Kyung Hee; Hiscox, Heather
2010-09-01
We evaluated the feasibility of using technology for melanoma patient education in a clinic setting. We assessed technology skill level and preferences for education. Data were collected using an adapted version of the Use of Technology Survey. Most participants owned a computer and DVD player and were skilled in the use of these devices, along with Internet and e-mail. Participants preferred the option of using in-clinic and at-home technology versus in-clinic only use. Computer and DVD applications were preferred because they were familiar and convenient. Using technology for patient education intervention is a viable option; however, patients' skill level and preferences for technology should be considered.
Evaluation of electrosurgical interference to low-power spread-spectrum local area net transceivers.
Gibby, G L; Schwab, W K; Miller, W C
1997-11-01
To study whether an electrosurgery device interferes with the operation of a low-power spread-spectrum wireless network adapter. Nonrandomized, unblinded trials with controls, conducted in the corridor of our institution's operating suite using two portable computers equipped with RoamAbout omnidirectional 250 mW spread-spectrum 928 MHz wireless network adapters. To simulate high-power electrosurgery interference, a 100-watt continuous electrocoagulation arc was maintained five feet from the receiving adapter while device-reported signal-to-noise values were measured at inter-adapter distances of 150 feet and 400 feet. At a range of 150 feet, with the continuous 100-watt electrocoagulation arc five feet from one computer, error-corrected local area net throughput was measured by sending and receiving a large file multiple times. The reported signal-to-noise ratio (N = 50) decreased with electrocoagulation from 36.42+/-3.47 (control) to 31.85+/-3.64 (p < 0.001) at 400 feet inter-adapter distance, and from 64.53+/-1.43 (control) to 60.12+/-3.77 (p < 0.001) at 150 feet inter-adapter distance. There was no statistically significant change in network throughput (average 93 kbyte/second) at 150 feet inter-adapter distance, either transmitting or receiving, during the continuous 100-watt electrocoagulation arc. The manufacturer indicates that "acceptable" performance will be obtained with signal-to-noise values as low as 20. In view of this, while electrocoagulation affects this spread-spectrum network adapter, the effects are small even at 400 feet. At a distance of 150 feet, no discernible effect on network communications was found, suggesting that if other obstructions are minimal, network communications may be maintained within a wide range on one floor of an operating suite using this wireless spread-spectrum network adapter technology. The impact of such adapters on cardiac pacemakers should be studied.
Wireless spread spectrum network adapters are an attractive technology for mobile computer communications in the operating room.
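The significance tests above can be reproduced from the reported summary statistics alone. The sketch below computes Welch's t statistic for the 400-foot signal-to-noise comparison, using the means, standard deviations, and N given in the abstract; the function name and the assumption of equal group sizes are illustrative, not taken from the study.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2          # squared standard errors
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# 400-foot comparison: control 36.42 +/- 3.47 vs. arc 31.85 +/- 3.64, N = 50 per group
t, df = welch_t(36.42, 3.47, 50, 31.85, 3.64, 50)
```

A t value above 6 with roughly 98 degrees of freedom corresponds to p far below 0.001, consistent with the reported result.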
Reshaping Computer Literacy Teaching in Higher Education: Identification of Critical Success Factors
ERIC Educational Resources Information Center
Taylor, Estelle; Goede, Roelien; Steyn, Tjaart
2011-01-01
Purpose: Acquiring computer skills is more important today than ever before, especially in a developing country. Teaching of computer skills, however, has to adapt to new technology. This paper aims to model factors influencing the success of the learning of computer literacy by means of an e-learning environment. The research question for this…
ERIC Educational Resources Information Center
Pommerich, Mary
2007-01-01
Computer administered tests are becoming increasingly prevalent as computer technology becomes more readily available on a large scale. For testing programs that utilize both computer and paper administrations, mode effects are problematic in that they can result in examinee scores that are artificially inflated or deflated. As such, researchers…
Developing Emotion-Aware, Advanced Learning Technologies: A Taxonomy of Approaches and Features
ERIC Educational Resources Information Center
Harley, Jason M.; Lajoie, Susanne P.; Frasson, Claude; Hall, Nathan C.
2017-01-01
A growing body of work on intelligent tutoring systems, affective computing, and artificial intelligence in education is exploring creative, technology-driven approaches to enhance learners' experience of adaptive, positively-valenced emotions while interacting with advanced learning technologies. Despite this, there has been no published work to…
Using Multi-Core Systems for Rover Autonomy
NASA Technical Reports Server (NTRS)
Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.
2010-01-01
Task objectives are: (1) develop and demonstrate key capabilities for rover long-range science operations using multi-core computing: (a) adapt three rover technologies to execute on a SOA multi-core processor, (b) illustrate the performance improvements achieved, (c) demonstrate the adapted capabilities with rover hardware; (2) target three high-level autonomy technologies: (a) two for onboard data analysis, (b) one for onboard command sequencing/planning; (3) technologies identified as enabling for future missions; (4) benefits will be measured along several metrics: (a) execution time / power requirements, (b) number of data products processed per unit time, (c) solution quality.
Real-time control system for adaptive resonator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flath, L; An, J; Brase, J
2000-07-24
Sustained operation of high average power solid-state lasers currently requires an adaptive resonator to produce the optimal beam quality. We describe the architecture of a real-time adaptive control system for correcting intra-cavity aberrations in a heat capacity laser. Image data collected from a wavefront sensor are processed and used to control phase with a high-spatial-resolution deformable mirror. Our controller takes advantage of recent developments in low-cost, high-performance processor technology. A desktop-based computational engine and an object-oriented software architecture replace the high-cost rack-mount embedded computers of previous systems.
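The control architecture described, wavefront sensor data driving a deformable mirror, is commonly implemented as a leaky-integrator loop. The sketch below is a minimal illustration of that general scheme, not the paper's actual controller; the identity reconstructor matrix, the gain, and the leak value are all hypothetical.

```python
def ao_step(commands, slopes, reconstructor, gain=0.3, leak=0.99):
    """One control iteration: reconstruct the residual wavefront from sensor
    slopes (matrix-vector product), then update the mirror commands with a
    leaky integrator."""
    residual = [sum(r * s for r, s in zip(row, slopes)) for row in reconstructor]
    return [leak * c + gain * p for c, p in zip(commands, residual)]

# Toy closed loop: a static aberration, an identity reconstructor, and a mirror
# whose commands are subtracted from the incoming wavefront.
aberration = [1.0, -0.5, 0.2]
identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
commands = [0.0, 0.0, 0.0]
for _ in range(200):
    slopes = [a - c for a, c in zip(aberration, commands)]  # measured residual
    commands = ao_step(commands, slopes, identity)
residual = [a - c for a, c in zip(aberration, commands)]    # small after convergence
```

With these settings the loop drives the residual to a few percent of the initial aberration; a real system replaces the identity matrix with a calibrated sensor-to-actuator reconstructor.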
A perspective of adaptation in healthcare.
Mezghani, Emna; Da Silveira, Marcos; Pruski, Cédric; Exposito, Ernesto; Drira, Khalil
2014-01-01
Emerging technologies in healthcare have shown great promise for managing patient care. In recent years, the evolution of Information and Communication Technologies has prompted many research studies to consider treatment plan adaptation in this area. The main goal is to accelerate decision making by dynamically generating new treatments in response to unexpected situations. This paper portrays treatment adaptation from a new perspective inspired by the human nervous system, named autonomic computing. The selected studies are classified according to the maturity levels of this paradigm. To guarantee optimal and accurate treatment adaptation, challenges related to medical knowledge and data are identified, and future directions to be explored in healthcare systems are discussed.
Using Intelligent Tutor Technology to Implement Adaptive Support for Student Collaboration
ERIC Educational Resources Information Center
Diziol, Dejana; Walker, Erin; Rummel, Nikol; Koedinger, Kenneth R.
2010-01-01
Research on computer-supported collaborative learning has shown that students need support to benefit from collaborative activities. While classical collaboration scripts have been effective in providing such support, they have also been criticized for being coercive and not allowing students to self-regulate their learning. Adaptive collaboration…
Computerized Adaptive Testing: Some Issues in Development.
ERIC Educational Resources Information Center
Orcutt, Venetia L.
The emergence of enhanced capabilities in computer technology coupled with the growing body of knowledge regarding item response theory has resulted in the expansion of computerized adaptive test (CAT) utilization in a variety of venues. Newcomers to the field need a more thorough understanding of item response theory (IRT) principles, their…
End-to-end plasma bubble PIC simulations on GPUs
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava
2017-10-01
Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.
Multiple Auto-Adapting Color Balancing for Large Number of Images
NASA Astrophysics Data System (ADS)
Zhou, X.
2015-04-01
This paper presents a powerful technology for color balancing between images. It works not only for small numbers of images but also for arbitrarily large collections. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on a so-called adaptive dodging window. The adaptive target colors are computed statistically according to multiple target models. A gamma function is derived from the adaptive target and the adaptive source local statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (or single color), color grid, and 1st-, 2nd- and 3rd-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically based on all source images or on an external target image. Special objects such as water and snow are filtered by a percentage cut or a given mask. The performance is fast enough to support on-the-fly color balancing for large numbers of images (potentially hundreds of thousands). The detailed algorithm and formulae are described, with rich examples including large mosaic datasets (e.g., one containing 36,006 images). The results show that this technology can be used successfully on various imagery to obtain color-seamless mosaics. The algorithm has been used successfully in Esri ArcGIS.
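The core gamma mapping can be illustrated compactly: choose gamma so that the normalized source local mean maps onto the target mean. This is a minimal sketch of that single step, assuming intensities normalized to (0, 1]; the function names are illustrative, and the full method's dodging windows and target color surfaces are omitted.

```python
import math

def gamma_for_means(source_mean, target_mean):
    """Exponent that maps the source local mean onto the target mean:
    target = source ** gamma, so gamma = log(target) / log(source)."""
    return math.log(target_mean) / math.log(source_mean)

def apply_gamma(pixels, gamma):
    """Apply the gamma curve to intensities normalized to (0, 1]."""
    return [p ** gamma for p in pixels]

# A dark tile (local mean 0.25) pulled toward a brighter target mean of 0.5
g = gamma_for_means(0.25, 0.5)                 # gamma = ln 0.5 / ln 0.25 = 0.5
balanced = apply_gamma([0.0625, 0.25, 0.81], g)
```

In the full method this exponent would be computed per dodging window, with the target mean read from the fitted target color surface rather than given as a constant.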
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA") based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.
Developing Intranets: Practical Issues for Implementation and Design.
ERIC Educational Resources Information Center
Trowbridge, Dave
1996-01-01
An intranet is a system which has "domesticated" the technologies of the Internet for specific organizational settings and goals. Although the adaptability of Hypertext Markup Language to intranets is sometimes limited, implementing various protocols and technologies enable organizations to share files among heterogeneous computers,…
Distance Learning and the Information Highway [and] Comments on Burgstahler.
ERIC Educational Resources Information Center
Burgstahler, Sheryl; And Others
1995-01-01
Two University of Washington programs that demonstrate the use of the Internet for distance learning in rehabilitation are described: Project DO-IT (Disabilities, Opportunities, Internetworking, and Technology) and an adaptive computer technology course. Comments by Oestreich and Adams follow the article. (SK)
NASA Technical Reports Server (NTRS)
Ross, Muriel D.
1991-01-01
The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis, since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.
Techniques for grid manipulation and adaptation. [computational fluid dynamics
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.
1992-01-01
Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
NASA Astrophysics Data System (ADS)
Chen, Su Shing; Caulfield, H. John
1994-03-01
Adaptive Computing, as opposed to Classical Computing, is emerging as a field that represents the culmination of more than 40 years of work in various scientific and technological areas, including cybernetics, neural networks, pattern recognition networks, learning machines, self-reproducing automata, genetic algorithms, fuzzy logics, probabilistic logics, chaos, electronics, optics, and quantum devices. This volume of "Critical Reviews on Adaptive Computing: Mathematics, Electronics, and Optics" is intended as a synergistic approach to this emerging field. Many researchers in these areas are working on important results, yet we have not seen a general effort to summarize and synthesize these results in theory as well as in implementation. In order to reach a higher level of synergism, we propose Adaptive Computing as the field that comprises the above-mentioned computational paradigms and their various realizations. The field should include both the Theory (or Mathematics) and the Implementation; our emphasis is on their interplay. The interplay of Theory and Implementation, an adaptive process itself, is the only "holistic" way to advance our understanding and realization of brain-like computation. We feel that a theory without implementation tends to become unrealistic and "out of touch" with reality, while an implementation without theory runs the risk of being superficial and obsolete.
Different Futures of Adaptive Collaborative Learning Support
ERIC Educational Resources Information Center
Rummel, Nikol; Walker, Erin; Aleven, Vincent
2016-01-01
In this position paper we contrast a Dystopian view of the future of adaptive collaborative learning support (ACLS) with a Utopian scenario that--due to better-designed technology, grounded in research--avoids the pitfalls of the Dystopian version and paints a positive picture of the practice of computer-supported collaborative learning 25 years…
Adaptive wall technology for minimization of wall interferences in transonic wind tunnels
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.
1988-01-01
Modern experimental techniques to improve free air simulations in transonic wind tunnels by use of adaptive wall technology are reviewed. Considered are the significant advantages of adaptive wall testing techniques with respect to wall interferences, Reynolds number, tunnel drive power, and flow quality. The application of these testing techniques relies on making the test section boundaries adjustable and using a rapid wall adjustment procedure. A historical overview shows how the disjointed development of these testing techniques, since 1938, is closely linked to available computer support. An overview of Adaptive Wall Test Section (AWTS) designs shows a preference for use of relatively simple designs with solid adaptive walls in 2- and 3-D testing. Operational aspects of AWTS's are discussed with regard to production type operation where adaptive wall adjustments need to be quick. Both 2- and 3-D data are presented to illustrate the quality of AWTS data over the transonic speed range. Adaptive wall technology is available for general use in 2-D testing, even in cryogenic wind tunnels. In 3-D testing, more refinement of the adaptive wall testing techniques is required before more widespread use can be planned.
ERIC Educational Resources Information Center
Swanquist, Barry
1998-01-01
Discusses how today's technology is encouraging schools to invest in furnishings that are adaptable to computer use and telecommunications access. Explores issues concerning modularity, wiring management, ergonomics, durability, price, and aesthetics. (GR)
Liu, Yushu; Ye, Hongqiang; Wang, Yong; Zhao, Yijao; Sun, Yuchun; Zhou, Yongsheng
2018-05-17
To evaluate the internal adaptation of cast crowns made from resin patterns produced using three different computer-aided design/computer-aided manufacturing (CAD/CAM) technologies. A full-crown abutment made of zirconia was digitized using an intraoral scanner, and the design of the crown was finished on the digital model. Resin patterns were fabricated using a fused deposition modeling (FDM) 3D printer (LT group), a digital light projection (DLP) 3D printer (EV group), or a five-axis milling machine (ZT group). All patterns were cast in cobalt-chromium alloy crowns. Crowns made from traditional handmade wax patterns (HM group) were used as controls. Each group contained 10 samples. The internal gaps of the patterns were analyzed using a 3D replica method and optical digitization. The results were compared using Kruskal-Wallis analysis of variance (ANOVA), a one-sample t test, and a signed rank test (α = .05). For the LT group, the marginal and axial gaps were significantly larger than in the other three groups (P < .05), but the occlusal adaptation did not reveal a significant difference (P > .05). In the ZT group, the axial gap was slightly smaller than in the HM group (P < .0083). The mean gaps in all areas of the four groups were less than 150 μm. Crowns cast from patterns made with all three CAD/CAM systems could not reproduce the prescribed parameters, but they showed clinically acceptable internal adaptation.
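The Kruskal-Wallis H statistic used for the group comparison can be computed from ranks alone. The sketch below implements the textbook formula, with average ranks for ties but no tie correction, on illustrative gap values that are hypothetical, not the study's data.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (average ranks for ties, no tie correction)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1                        # run of tied values: ranks i+1 .. j
        avg_rank = (i + 1 + j) / 2.0      # mean of the tied ranks
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return (12.0 / (n * (n + 1))
            * sum(rs * rs / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n + 1))

# Illustrative (hypothetical) internal-gap values in micrometers for two groups
h = kruskal_h([18, 23, 25], [29, 31, 35])
```

With two groups of three non-overlapping values, H reaches its maximum of 27/7 ≈ 3.857; the statistic is then referred to a chi-squared distribution with (number of groups − 1) degrees of freedom.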
ERIC Educational Resources Information Center
Zadahmad, Manouchehr; Yousefzadehfard, Parisa
2016-01-01
Mobile Cloud Computing (MCC) aims to improve all mobile applications such as m-learning systems. This study presents an innovative method to use web technology and software engineering's best practices to provide m-learning functionalities hosted in a MCC-learning system as service. Components hosted by MCC are used to empower developers to create…
Assistive Technology Developments in Puerto Rico.
ERIC Educational Resources Information Center
Lizama, Mauricio A.; Mendez, Hector L.
Recent efforts to develop Spanish-based adaptations for alternate computer input devices are considered, as are their implications for Hispanics with disabilities and for the development of language sensitive devices worldwide. Emphasis is placed on the particular need to develop low-cost high technology devices for Puerto Rico and Latin America…
Unstructured mesh adaptivity for urban flooding modelling
NASA Astrophysics Data System (ADS)
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increasing frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain imposes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process; for example, high-resolution meshes around buildings and steep regions are introduced when the flooding water reaches those regions. In this work, a flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations were performed using both fixed and adaptive unstructured meshes, and the results were compared with published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results at a low computational cost.
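The core idea of adapting resolution to evolving flow features can be illustrated with a one-dimensional refinement pass that bisects cells wherever the water depth jumps sharply, e.g. at a wetting front. This is a toy sketch of the general technique, not the paper's control-volume finite-element model; the depth field and tolerance are hypothetical.

```python
def refine(cells, depth, tol, max_level=8):
    """One adaptation pass: bisect any interval whose depth jump exceeds tol."""
    out = []
    for a, b, level in cells:
        if level < max_level and abs(depth(b) - depth(a)) > tol:
            mid = 0.5 * (a + b)
            out += [(a, mid, level + 1), (mid, b, level + 1)]
        else:
            out.append((a, b, level))
    return out

# A sharp wetting front at x = 0.5 attracts all the resolution
depth = lambda x: 0.0 if x < 0.5 else 1.0
cells = [(0.0, 1.0, 0)]
for _ in range(8):
    cells = refine(cells, depth, tol=0.1)
```

After eight passes the mesh keeps one coarse cell over each flat region and cells 256 times finer around the front, which is the cost saving the paper's 2D unstructured version exploits.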
USSR Report, Cybernetics, Computers and Automation Technology
1987-03-31
version of the system was tested by adapting PAL-11 and MACRO-11 assembly code for the "Elektronika-60" and "Elektronika-60M" computers; ASM-86 for the...GS, "On the Results of Evaluation of Insurance Payments in Collective and State Farms and Private Households," the actuarial analysis tables based
Chang, Ching-I; Yan, Huey-Yeu; Sung, Wen-Hsu; Shen, Shu-Cheng; Chuang, Pao-Yu
2006-01-01
The purpose of this research was to develop a computer-aided instruction system for intra-aortic balloon pumping (IABP) skills in clinical nursing using virtual instrument (VI) concepts. Computer graphics technologies were incorporated to provide not only static clinical nursing education but also a simulated function for operating an expensive medical instrument with VI techniques. The nursing content was adapted from current, well-accepted clinical training materials. The VI functions were developed using computer graphics technology with photos of real medical instruments taken with a digital camera. We hope the system can provide beginners in nursing education with important teaching assistance.
Methodological approaches of health technology assessment.
Goodman, C S; Ahn, R
1999-12-01
In this era of evolving health care systems throughout the world, technology remains the substance of health care. Medical informatics comprises a growing contribution to the technologies used in the delivery and management of health care. Diverse, evolving technologies include artificial neural networks, computer-assisted surgery, computer-based patient records, hospital information systems, and more. Decision-makers increasingly demand well-founded information to determine whether or how to develop these technologies, allow them on the market, acquire them, use them, pay for their use, and more. The development and wider use of health technology assessment (HTA) reflects this demand. While HTA offers systematic, well-founded approaches for determining the value of medical informatics technologies, HTA must continue to adapt and refine its methods in response to these evolving technologies. This paper provides a basic overview of HTA principles and methods.
Research and development of LANDSAT-based crop inventory techniques
NASA Technical Reports Server (NTRS)
Horvath, R.; Cicone, R. C.; Malila, W. A. (Principal Investigator)
1982-01-01
A wide spectrum of technology pertaining to the inventory of crops using LANDSAT without in situ training data is addressed. Methods considered include Bayesian-based through-the-season methods, estimation technology based on analytical profile-fitting methods, and expert-based computer-aided methods. Although the research was conducted using U.S. data, the adaptation of the technology to the Southern Hemisphere, especially Argentina, was considered.
Computer- and Internet-related intellectual property issues
NASA Astrophysics Data System (ADS)
Meyer, Stuart P.
2001-05-01
Computer-related technologies, such as the Internet, have posed new challenges for intellectual property law. Legislation and court decisions impacting patents, copyrights, trade secrets and trademarks have adapted intellectual property law to address new issues brought about by such emerging technologies. As the pace of technological change continues to increase, intellectual property law will need to keep up. Accordingly, the balance struck by intellectual property laws today will likely be set askew by technological changes in the future. Engineers need to consider not only the law as it exists today, but also how it might change in the future. Likewise, lawyers and judges need to consider legal issues not only in view of the current state of the art in technology, but also with an eye to technologies yet to come.
Papadiochou, Sofia; Pissiotis, Argirios L
2018-04-01
The comparative assessment of computer-aided design and computer-aided manufacturing (CAD-CAM) technology against other fabrication techniques with respect to marginal adaptation needs to be documented. Limited evidence exists on the effect of restorative material on the performance of a CAD-CAM system relative to marginal adaptation. The purpose of this systematic review was to investigate whether the marginal adaptation of CAD-CAM single crowns, fixed dental prostheses, and implant-retained fixed dental prostheses or their infrastructures differs from that obtained by other fabrication techniques using a similar restorative material, and whether it depends on the type of restorative material. An electronic search of English-language literature published between January 1, 2000, and June 30, 2016, was conducted of the Medline/PubMed database. Of the 55 included comparative studies, 28 compared CAD-CAM technology with conventional fabrication techniques, 12 contrasted CAD-CAM technology and copy milling, 4 compared CAD-CAM milling with direct metal laser sintering (DMLS), and 22 investigated the performance of a CAD-CAM system regarding marginal adaptation in restorations/infrastructures produced with different restorative materials. Most of the CAD-CAM restorations/infrastructures were within the clinically acceptable marginal discrepancy (MD) range. The performance of a CAD-CAM system relative to marginal adaptation is influenced by the restorative material. Compared with CAD-CAM, most of the heat-pressed lithium disilicate crowns displayed equal or smaller MD values. Slip-casting crowns exhibited similar or better marginal accuracy than those fabricated with CAD-CAM. Cobalt-chromium and titanium implant infrastructures produced using a CAD-CAM system elicited smaller MD values than zirconia. The majority of cobalt-chromium restorations/infrastructures produced by DMLS displayed better marginal accuracy than those fabricated with the casting technique.
Compared with copy milling, the majority of zirconia restorations/infrastructures produced by CAD-CAM milling exhibited better marginal adaptation. No clear conclusions can be drawn about the superiority of CAD-CAM milling over the casting technique and DMLS regarding marginal adaptation. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), which is system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
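The monitor, analyze, adapt cycle described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's RTM: the callback names (`sample_counters`, `pick_config`, `apply_config`) and the thread-scaling policy are invented for the example.

```python
# Illustrative sketch of the monitor -> analyze -> adapt cycle; the callback
# names and the thread-scaling policy are invented for this example.

def run_time_monitor(sample_counters, pick_config, apply_config, steps=3):
    """Repeatedly sample metrics, analyze the history, adapt the configuration."""
    history = []
    for _ in range(steps):
        metrics = sample_counters()      # monitor: library/hardware counters
        history.append(metrics)
        config = pick_config(history)    # analyze: choose a configuration
        apply_config(config)             # adapt: reconfigure resources
    return history

# Toy usage: scale the thread count with the observed CPU load.
loads = iter([0.2, 0.7, 0.95])
chosen = []
run_time_monitor(
    sample_counters=lambda: {"cpu": next(loads)},
    pick_config=lambda h: {"threads": 1 if h[-1]["cpu"] < 0.5 else 4},
    apply_config=chosen.append,
)
# chosen is now [{'threads': 1}, {'threads': 4}, {'threads': 4}]
```

A real monitor would sample hardware performance counters and instrumented library calls instead of a canned load sequence, but the control loop has the same shape.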
Continuing challenges for computer-based neuropsychological tests.
Letz, Richard
2003-08-01
A number of issues critical to the development of computer-based neuropsychological testing systems that remain continuing challenges to their widespread use in occupational and environmental health are reviewed. Several computer-based neuropsychological testing systems have been developed over the last 20 years, and they have contributed substantially to the study of neurologic effects of a number of environmental exposures. However, many are no longer supported and do not run on contemporary personal computer operating systems. Issues that are continuing challenges for development of computer-based neuropsychological tests in environmental and occupational health are discussed: (1) some current technological trends that generally make test development more difficult; (2) lack of availability of usable speech recognition of the type required for computer-based testing systems; (3) implementing computer-based procedures and tasks that are improvements over, not just adaptations of, their manually-administered predecessors; (4) implementing tests of a wider range of memory functions than the limited range now available; (5) paying more attention to motivational influences that affect the reliability and validity of computer-based measurements; and (6) increasing the usability of and audience for computer-based systems. Partial solutions to some of these challenges are offered. The challenges posed by current technological trends are substantial and generally beyond the control of testing system developers. Widespread acceptance of the "tablet PC" and implementation of accurate small vocabulary, discrete, speaker-independent speech recognition would enable revolutionary improvements to computer-based testing systems, particularly for testing memory functions not covered in existing systems. 
Dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized-adaptive testing (CAT) methods, will be implemented in new tests that will be more efficient, reliable, and valid than existing test procedures. These additional developments, along with implementation of innovative reporting formats, are necessary for more widespread acceptance of the testing systems.
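The IRT-based adaptive testing idea mentioned above can be illustrated with a small sketch: under a two-parameter logistic (2PL) model, a CAT administers the not-yet-asked item with the greatest Fisher information at the examinee's current ability estimate. The item bank and its parameter values below are invented for illustration.

```python
import math

# Hedged sketch of computerized-adaptive item selection under a two-parameter
# logistic (2PL) IRT model; the item bank and its parameters are invented.

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, asked):
    """Choose the unasked item that is most informative at the current estimate."""
    candidates = [i for i in range(len(items)) if i not in asked]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))

bank = [(1.0, -2.0), (1.5, 0.0), (0.8, 2.5)]  # (discrimination, difficulty)
# near theta = 0 the middle item is most informative; once it has been
# administered, the easy item carries the most information
first = next_item(0.0, bank, asked=set())   # selects item 1
second = next_item(0.1, bank, asked={1})    # selects item 0
```

After each response the ability estimate is updated (e.g. by maximum likelihood) and the selection step repeats, which is what makes the test both shorter and more reliable than a fixed form.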
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.
2003-01-01
The Next Generation Launch Technology (NGLT) program, Vehicle Systems Research and Technology (VSR&T) project is pursuing technology advancements in aerothermodynamics, aeropropulsion and flight mechanics to enable development of future reusable launch vehicle (RLV) systems. The current design trade space includes rocket-propelled, hypersonic airbreathing and hybrid systems in two-stage and single-stage configurations. Aerothermodynamics technologies include experimental and computational databases to evaluate stage separation of two-stage vehicles as well as computational and trajectory simulation tools for this problem. Additionally, advancements in high-fidelity computational tools and measurement techniques are being pursued along with the study of flow physics phenomena, such as boundary-layer transition. Aero-propulsion technology development includes scramjet flowpath development and integration, with a current emphasis on hypervelocity (Mach 10 and above) operation, as well as the study of aero-propulsive interactions and the impact on overall vehicle performance. Flight mechanics technology development is focused on advanced guidance, navigation and control (GN&C) algorithms and adaptive flight control systems for both rocket-propelled and airbreathing vehicles.
ERIC Educational Resources Information Center
Panoutsopoulos, Hercules; Donert, Karl; Papoutsis, Panos; Kotsanis, Ioannis
2015-01-01
During the last few years, ongoing developments in the technological field of Cloud computing have initiated discourse on the potential of the Cloud to be systematically exploited in educational contexts. Research interest has been stimulated by a range of advantages of Cloud technologies (e.g. adaptability, flexibility, scalability,…
School Librarians as Technology Leaders: An Evolution in Practice
ERIC Educational Resources Information Center
Wine, Lois D.
2016-01-01
The role of school librarians has a history of radical change. School librarians adapted to take on responsibility for technology and audio-visual materials that were introduced in schools in earlier eras. With the advent of the Information Age in the middle of the 20th century and the subsequent development of personal computers and the Internet,…
ERIC Educational Resources Information Center
Pursell, David P.
2009-01-01
Students of organic chemistry traditionally make 3 x 5 in. flash cards to assist learning nomenclature, structures, and reactions. Advances in educational technology have enabled flash cards to be viewed on computers, offering an endless array of drilling and feedback for students. The current generation of students is less inclined to use…
ERIC Educational Resources Information Center
Peng, Hsinyi; Chuang, Po-Ya; Hwang, Gwo-Jen; Chu, Hui-Chun; Wu, Ting-Ting; Huang, Shu-Xian
2009-01-01
Researchers have conducted various studies on applying wireless communication and ubiquitous computing technologies to education, so that the technologies can provide learners and educators with more active and adaptive support. This study proposes a Ubiquitous Performance-support System (UPSS) that can facilitate the seamless use of powerful new…
ERIC Educational Resources Information Center
Carranza, Mario
2016-01-01
This paper addresses the process of transcribing and annotating spontaneous non-native speech with the aim of compiling a training corpus for the development of Computer Assisted Pronunciation Training (CAPT) applications, enhanced with Automatic Speech Recognition (ASR) technology. To better adapt ASR technology to CAPT tools, the recognition…
Hospital positioning: a strategic tool for the 1990s.
San Augustine, A J; Long, W J; Pantzallis, J
1992-03-01
The authors extend the process of market positioning in the health care sector by focusing on the simultaneous utilization of traditional research methods and emerging new computer-based adaptive perceptual mapping technologies and techniques.
Dynamic electronic institutions in agent oriented cloud robotic systems.
Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice
2015-01-01
The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade-view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the process of formation, reformation, and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.
Tomographic methods in flow diagnostics
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
1993-01-01
This report presents a viewpoint of tomography that should be well adapted to currently available optical measurement technology as well as the needs of computational and experimental fluid dynamicists. The goals in mind are to record data with the fastest optical array sensors; to process the data with the fastest parallel processing technology available for small computers; and to generate results for both experimental and theoretical data. An in-depth example treats interferometric data as it might be recorded in an aeronautics test facility, but the results are applicable whenever fluid properties are to be measured or applied from projections of those properties. The paper discusses both computed and neural net calibration tomography. The report also contains an overview of key definitions and computational methods, key references, computational problems such as ill-posedness, artifacts, and missing data, and some possible and current research topics.
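As a hedged illustration of reconstructing a property field from projections, the following sketch applies the algebraic reconstruction technique (ART, the Kaczmarz iteration) to a toy 2x2 field measured by row and column sums; the example data are invented and are not the report's interferometric case.

```python
# Toy algebraic reconstruction (ART / Kaczmarz iteration): recover a 2x2
# "field" from its row and column sums. The data are invented; this is a
# sketch of the tomography idea, not the report's interferometric pipeline.

def art_reconstruct(rows, b, x, sweeps=50, relax=1.0):
    """Cyclically project the estimate x onto each measurement hyperplane."""
    for _ in range(sweeps):
        for a, bi in zip(rows, b):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm = sum(ai * ai for ai in a)
            if norm == 0:
                continue
            step = relax * (bi - dot) / norm
            x = [xi + step * ai for ai, xi in zip(a, x)]
    return x

# Field [f0 f1; f2 f3] observed through four ray sums (two rows, two columns).
rows = [
    [1, 1, 0, 0],  # top row sum
    [0, 0, 1, 1],  # bottom row sum
    [1, 0, 1, 0],  # left column sum
    [0, 1, 0, 1],  # right column sum
]
b = [3.0, 7.0, 4.0, 6.0]  # projections of the true field [1, 2, 3, 4]
estimate = art_reconstruct(rows, b, [0.0] * 4)
# starting from zero, the iteration converges to the minimum-norm solution,
# which for these measurements is the true field [1.0, 2.0, 3.0, 4.0]
```

Real tomographic systems have many more rays than cells and noisy data, which is where the ill-posedness, artifact, and missing-data issues the report discusses come in.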
Adaptive thinking & leadership simulation game training for special forces officers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raybourn, Elaine Marie; Mendini, Kip; Heneghan, Jerry
Complex problem solving approaches and novel strategies employed by the military at the squad, team, and commander level are often best learned experientially. Since live-action exercises can be costly, advances in simulation game training technology offer exciting ways to enhance current training. Computer games provide an environment for active, critical learning. Games open up possibilities for simultaneous learning on multiple levels; players may learn from contextual information embedded in the dynamics of the game, the organic process generated by the game, and through the risks, benefits, costs, outcomes, and rewards of alternative strategies that result from decision making. In the present paper we discuss a multiplayer computer game simulation created for the Adaptive Thinking & Leadership (ATL) Program to train Special Forces team leaders. The ATL training simulation consists of a scripted single-player and an immersive multiplayer environment for classroom use which leverages immersive computer game technology. We define adaptive thinking as consisting of competencies such as negotiation and consensus-building skills, the ability to communicate effectively, analyze ambiguous situations, be self-aware, think innovatively, and critically use effective problem solving skills. Each of these competencies is an essential element of leader development training for the U.S. Army Special Forces. The ATL simulation is used to augment experiential learning in the curriculum for the U.S. Army JFK Special Warfare Center & School (SWCS) course in Adaptive Thinking & Leadership. The school is incorporating the ATL simulation game into two additional training pipelines (PSYOPS and Civil Affairs Qualification Courses) that are also concerned with developing cultural awareness, interpersonal communication adaptability, and rapport-building skills.
In the present paper, we discuss the design, development, and deployment of the training simulation, and emphasize how the multiplayer simulation game is successfully used in the Special Forces Officer training program.
Computer interfaces for the visually impaired
NASA Technical Reports Server (NTRS)
Higgins, Gerry
1991-01-01
Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, with the aim of combining off-the-shelf products with adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile access to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access to the visually impaired.
Human/Computer Interfacing in Educational Environments.
ERIC Educational Resources Information Center
Sarti, Luigi
1992-01-01
This discussion of educational applications of user interfaces covers the benefits of adopting database techniques in organizing multimedia materials; the evolution of user interface technology, including teletype interfaces, analogic overlay graphics, window interfaces, and adaptive systems; application design problems, including the…
Use of adaptive walls in 2D tests
NASA Technical Reports Server (NTRS)
Archambaud, J. P.; Chevallier, J. P.
1984-01-01
A new method for computing the wall effects gives precise answers to some questions arising in adaptive wall concept applications: the length of adapted regions, fairings with the upstream and downstream regions, residual misadjustment effects, and reference conditions. The acceleration of the iterative process convergence and the development of an efficient technology used in the CERT T2 wind tunnel yield the required test conditions in a single run. Samples taken from CAST 7 tests demonstrate the efficiency of the whole process in obtaining significant results, with consideration of extension to the three-dimensional case.
NASA Technical Reports Server (NTRS)
Duncan, K. M.; Harm, D. L.; Crosier, W. G.; Worthington, J. W.
1993-01-01
A unique training device is being developed at the Johnson Space Center Neurosciences Laboratory to help reduce or eliminate Space Motion Sickness (SMS) and spatial orientation disturbances that occur during spaceflight. The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) uses virtual reality technology to simulate some sensory rearrangements experienced by astronauts in microgravity. By exposing a crew member to this novel environment preflight, it is expected that he/she will become partially adapted, and thereby suffer fewer symptoms inflight. The DOME PAT is a 3.7 m spherical dome, within which a 170 by 100 deg field of view computer-generated visual database is projected. The visual database currently in use depicts the interior of a Shuttle spacelab. The trainee uses a six degree-of-freedom, isometric force hand controller to navigate through the virtual environment. Alternatively, the trainee can be 'moved' about within the virtual environment by the instructor, or can look about within the environment by wearing a restraint that controls scene motion in response to head movements. The computer system is comprised of four personal computers that provide the real time control and user interface, and two Silicon Graphics computers that generate the graphical images. The image generator computers use custom algorithms to compensate for spherical image distortion, while maintaining a video update rate of 30 Hz. The DOME PAT is the first such system known to employ virtual reality technology to reduce the untoward effects of the sensory rearrangement associated with exposure to microgravity, and it does so in a very cost-effective manner.
ERIC Educational Resources Information Center
Lahm, Elizabeth A.; Morrissette, Sandra K.
This collection of materials describes different types of computer applications and software that can help students with disabilities. It contains information on: (1) Easy Access, a feature of the systems software on every Macintosh computer that allows use of the keypad instead of the mouse, options for slow keys, and options for sticky keys; (2)…
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Khramushin, Vasily
2016-02-01
The architecture of a digital computing system determines the technical foundation of a unified mathematical language for exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. The deep parallelization of the computing processes results in functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for the true reproduction of the fundamental laws of physics and continuum mechanics.
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data. PMID:22163811
A tale of three bio-inspired computational approaches
NASA Astrophysics Data System (ADS)
Schaffer, J. David
2014-05-01
I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation, which implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched, and some lessons I have gleaned from three decades of working with EC will be covered. Then neural networks: computational approaches that have long been studied as possible ways to make "thinking machines," an old dream of humankind, based upon the only known existing example of intelligence. Then, a little overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages: the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.
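The evolutionary-computation loop referred to above (selection, crossover, mutation) can be sketched on the OneMax toy problem; the population size, operators, and parameters here are illustrative choices, not the variants discussed in the talk.

```python
import random

# Minimal evolutionary-computation sketch: binary tournament selection,
# one-point crossover, and point mutation on the OneMax toy problem
# (maximize the number of 1 bits). Parameters are illustrative choices.

def evolve(fitness, n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)            # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(n_bits)            # flip one random bit
            child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # fitness of a bit string is simply its sum
# best is a near-all-ones string after 60 generations
```

Swapping the fitness function is all it takes to point the same loop at a different problem, which is part of why EC is called the "mother of all adaptive processes."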
MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank Mueller
2009-02-05
MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high-availability without single points of failure and without single points of control.
Sittig, Dean F.; Singh, Hardeep
2011-01-01
Conceptual models have been developed to address challenges inherent in studying health information technology (HIT). This manuscript introduces an 8-dimensional model specifically designed to address the socio-technical challenges involved in design, development, implementation, use, and evaluation of HIT within complex adaptive healthcare systems. The 8 dimensions are not independent, sequential, or hierarchical, but rather are interdependent and interrelated concepts similar to compositions of other complex adaptive systems. Hardware and software computing infrastructure refers to equipment and software used to power, support, and operate clinical applications and devices. Clinical content refers to textual or numeric data and images that constitute the “language” of clinical applications. The human computer interface includes all aspects of the computer that users can see, touch, or hear as they interact with it. People refers to everyone who interacts in some way with the system, from developer to end-user, including potential patient-users. Workflow and communication are the processes or steps involved in assuring that patient care tasks are carried out effectively. Two additional dimensions of the model are internal organizational features (e.g., policies, procedures, and culture) and external rules and regulations, both of which may facilitate or constrain many aspects of the preceding dimensions. The final dimension is measurement and monitoring, which refers to the process of measuring and evaluating both intended and unintended consequences of HIT implementation and use. We illustrate how our model has been successfully applied in real-world complex adaptive settings to understand and improve HIT applications at various stages of development and implementation. PMID:20959322
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).
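The refinement-flagging step at the heart of SAMR can be illustrated with a hedged one-dimensional sketch: cells whose local gradient exceeds a threshold are marked for refinement. The tanh layer below merely stands in for a thin current-sheet-like feature; it is not the reduced-MHD solver itself.

```python
import math

# One-dimensional sketch of SAMR's flagging step: mark cells whose local
# gradient exceeds a threshold. The tanh profile stands in for a thin
# current-sheet-like layer; it is not the reduced-MHD model itself.

def flag_for_refinement(values, dx, threshold):
    """Return indices of interior cells where |du/dx| (centered) > threshold."""
    flags = []
    for i in range(1, len(values) - 1):
        grad = abs(values[i + 1] - values[i - 1]) / (2.0 * dx)
        if grad > threshold:
            flags.append(i)
    return flags

n = 64
dx = 1.0 / n
# steep layer of width ~0.02 centered at x = 0.5
u = [math.tanh((i * dx - 0.5) / 0.02) for i in range(n)]
flagged = flag_for_refinement(u, dx, threshold=5.0)
# only the handful of cells straddling the layer are flagged,
# so refinement (and hence cost) stays local to the sharp feature
```

In a full SAMR code the flagged cells are clustered into patches and covered with finer grids, with multilevel solvers such as FAC operating across the resulting hierarchy.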
Dynamically adaptive data-driven simulation of extreme hydrological flows
NASA Astrophysics Data System (ADS)
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
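The ensemble-based assimilation step described above can be sketched for a scalar state observed directly, using a stochastic ensemble Kalman filter (EnKF) update; the prior, observation, and ensemble size are invented for illustration.

```python
import random

# Scalar stochastic ensemble Kalman filter (EnKF) update, illustrating
# ensemble-based assimilation; prior, observation, and sizes are invented.

def enkf_update(ensemble, obs, obs_err_std, rng):
    """Nudge each member toward a perturbed observation by the Kalman gain."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    gain = var / (var + obs_err_std ** 2)                   # Kalman gain (H = 1)
    return [x + gain * (obs + rng.gauss(0.0, obs_err_std) - x) for x in ensemble]

rng = random.Random(0)
forecast = [rng.gauss(2.0, 1.0) for _ in range(200)]  # prior: mean ~2, sd ~1
analysis = enkf_update(forecast, obs=4.0, obs_err_std=0.5, rng=rng)
# with an accurate observation at 4.0, the ensemble mean moves from ~2
# most of the way toward 4 (gain ~0.8), and the ensemble spread tightens
```

In the coupled scheme the investigation describes, updates like this would be interleaved with AMR forward steps, so observations both correct the state and influence where the mesh refines.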
Data systems and computer science: Neural networks base R/T program overview
NASA Technical Reports Server (NTRS)
Gulati, Sandeep
1991-01-01
The research base, in the U.S. and abroad, for the development of neural network technology is discussed. The technical objectives are to develop and demonstrate adaptive, neural information processing concepts. The leveraging of external funding is also discussed.
An Architecture for Cross-Cloud System Management
NASA Astrophysics Data System (ADS)
Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad
The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
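The homogenising layer the abstract argues for can be sketched as an adapter interface: provider-specific classes translate a common contract into each cloud's own API. The provider names and methods below are hypothetical, invented for illustration, and are not the paper's actual design or the EC2 interfaces it evaluates.

```python
# Adapter sketch: a common management interface over provider-specific APIs.
# Provider names and methods are hypothetical, invented for illustration.

class CloudAdapter:
    """Uniform interface; subclasses translate calls to a provider's own API."""
    def start_instance(self, image):
        raise NotImplementedError
    def stop_instance(self, handle):
        raise NotImplementedError

class ProviderA(CloudAdapter):
    def start_instance(self, image):
        return f"a:{image}"            # stands in for provider A's API call
    def stop_instance(self, handle):
        return handle.startswith("a:")

class ProviderB(CloudAdapter):
    def start_instance(self, image):
        return f"b-{image}"            # a different interface, same contract
    def stop_instance(self, handle):
        return handle.startswith("b-")

def launch_everywhere(providers, image):
    """A cross-cloud operation written once against the common interface."""
    return [p.start_instance(image) for p in providers]

handles = launch_everywhere([ProviderA(), ProviderB()], "web-tier")
# handles == ["a:web-tier", "b-web-tier"]
```

Callers depend only on `CloudAdapter`, so adding a provider means writing one new adapter rather than touching every management workflow, which is the flexibility the architecture aims to preserve.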
Multi-petascale highly efficient parallel supercomputer
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng
2015-07-14
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model in which many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.
Cyber-workstation for computational neuroscience.
Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C
2010-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. a recursive least-squares regressor) by specifying appropriate connections in a block diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility are important as experimental and theoretical neuroscience evolve based on insights gained from data-intensive experiments, new technologies, and engineering methodologies. This paper briefly describes the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo neuroscience experiment. Furthermore, a co-adaptive brain-machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavioral task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
NASA Astrophysics Data System (ADS)
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
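The adaptive-classifier idea can be illustrated with a much-simplified stand-in: each new point is labeled by Gaussian responsibilities, and the winning class mean is then nudged toward it, so the decision boundary tracks a drifting activation pattern without ground-truth labels. This is a plain online-EM sketch under assumed isotropic Gaussians, not the paper's variational Bayesian GMMAC; all names are illustrative.

```python
import numpy as np

class AdaptiveGMMClassifier:
    """Unsupervised adaptive Gaussian classifier (simplified GMMAC analogue).

    Points are labeled by posterior responsibility under per-class
    isotropic Gaussians; the winning class mean is updated toward the
    point, so the decoder follows slow drift in activation patterns.
    """

    def __init__(self, means, var=1.0, lr=0.1):
        self.means = np.asarray(means, dtype=float)   # (n_classes, n_dims)
        self.var = float(var)
        self.lr = float(lr)

    def _responsibilities(self, x):
        d2 = ((self.means - x) ** 2).sum(axis=1)
        log_p = -0.5 * d2 / self.var
        p = np.exp(log_p - log_p.max())               # numerically stable
        return p / p.sum()

    def classify_and_adapt(self, x):
        x = np.asarray(x, dtype=float)
        r = self._responsibilities(x)
        k = int(np.argmax(r))
        # online EM-style step: nudge the winning mean toward x
        self.means[k] += self.lr * r[k] * (x - self.means[k])
        return k
```

Feeding the classifier a slowly shifting stream of class-1 points drags that class mean along with the shift, mimicking the activation-region drift the simulations test.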
Adopting best practices: "Agility" moves from software development to healthcare project management.
Kitzmiller, Rebecca; Hunt, Eleanor; Sproat, Sara Breckenridge
2006-01-01
It is time for a change in mindset in how nurses operationalize system implementations and manage projects. Computers and systems have evolved over time from unwieldy mysterious machines of the past to ubiquitous computer use in every aspect of daily lives and work sites. Yet, disconcertingly, the process used to implement these systems has not evolved. Technology implementation does not need to be a struggle. It is time to adapt traditional plan-driven implementation methods to incorporate agile techniques. Agility is a concept borrowed from software development and is presented here because it encourages flexibility, adaptation, and continuous learning as part of the implementation process. Agility values communication and harnesses change to an advantage, which facilitates the natural evolution of an adaptable implementation process. Specific examples of agility in an implementation are described, and plan-driven implementation stages are adapted to incorporate relevant agile techniques. This comparison demonstrates how an agile approach enhances traditional implementation techniques to meet the demands of today's complex healthcare environments.
Influence of technology on magnetic tape storage device characteristics
NASA Technical Reports Server (NTRS)
Gniewek, John J.; Vogel, Stephen M.
1994-01-01
There are available today many data storage devices that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are the 3480/3490 and QIC device types developed for the high-end and low-end segments of the data processing industry, respectively; the VHS, Beta, and 8 mm formats developed for consumer video applications; and the D-1, D-2, and D-3 formats developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm, and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. However, there is as yet no evidence of convergence among data storage device types. There are several reasons for this. The diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.
NASA Astrophysics Data System (ADS)
Yamamoto, Toshiaki; Ueda, Tetsuro; Obana, Sadao
As one of the dynamic spectrum access technologies, “cognitive radio technology,” which aims to improve spectrum efficiency, has been studied. In cognitive radio networks, each node recognizes radio conditions and, according to them, optimizes its wireless communication routes. Cognitive radio systems integrate heterogeneous wireless systems not only by switching over them but also by aggregating and utilizing them simultaneously. The adaptive control of switchover use and concurrent use of various wireless systems will offer stable and flexible wireless communication. In this paper, we propose an adaptive traffic route control scheme that provides high quality of service (QoS) for cognitive radio technology, and examine the performance of the proposed scheme through field trials and computer simulations. The results of field trials show that the adaptive route control according to the radio conditions improves user IP throughput by more than 20% and reduces the one-way delay to less than 1/6 with the concurrent use of IEEE802.16 and IEEE802.11 wireless media. Moreover, the simulation results assuming hundreds of mobile terminals reveal that the number of users receiving the required QoS for voice over IP (VoIP) service and the total network throughput of FTP users more than double at the same time with the proposed algorithm. The proposed adaptive traffic route control scheme can enhance the performance of cognitive radio technologies by providing appropriate communication routes for various applications to satisfy their required QoS.
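The switchover-versus-aggregation policy described above can be sketched with a toy route selector (structure and thresholds are illustrative, not the paper's algorithm): delay-sensitive flows are pinned to the lowest-delay acceptable medium, while bulk flows aggregate every medium whose delay meets their requirement.

```python
def select_routes(flows, media):
    """Assign each flow the medium (or media) meeting its QoS need.

    flows: list of dicts with 'kind' and required 'max_delay_ms'
    media: dict name -> {'delay_ms': ..., 'throughput_mbps': ...}
    """
    assignments = {}
    for i, flow in enumerate(flows):
        ok = [m for m, q in media.items() if q["delay_ms"] <= flow["max_delay_ms"]]
        if not ok:
            assignments[i] = []                 # no medium satisfies the QoS
        elif flow["kind"] == "voip":
            # delay-sensitive: switch to the single lowest-delay medium
            assignments[i] = [min(ok, key=lambda m: media[m]["delay_ms"])]
        else:
            # bulk transfer: use all acceptable media concurrently
            assignments[i] = sorted(ok)
    return assignments
```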
Combining Brain–Computer Interfaces and Assistive Technologies: State-of-the-Art and Challenges
Millán, J. d. R.; Rupp, R.; Müller-Putz, G. R.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.; Cincotti, F.; Kübler, A.; Leeb, R.; Neuper, C.; Müller, K.-R.; Mattia, D.
2010-01-01
In recent years, new research has brought the field of electroencephalogram (EEG)-based brain–computer interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely, “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user–machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles in human–computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices. PMID:20877434
Adapting Technological Interventions to Meet the Needs of Priority Populations.
Linke, Sarah E; Larsen, Britta A; Marquez, Becky; Mendoza-Vasconez, Andrea; Marcus, Bess H
2016-01-01
Cardiovascular diseases (CVD) comprise the leading cause of mortality worldwide, accounting for 3 in 10 deaths. Individuals with certain risk factors, including tobacco use, obesity, low levels of physical activity, type 2 diabetes mellitus, racial/ethnic minority status and low socioeconomic status, experience higher rates of CVD and are, therefore, considered priority populations. Technological devices such as computers and smartphones are now routinely utilized in research studies aiming to prevent CVD and its risk factors, and they are also rampant in the public and private health sectors. Traditional health behavior interventions targeting these risk factors have been adapted for technology-based approaches. This review provides an overview of technology-based interventions conducted in these priority populations as well as the challenges and gaps to be addressed in future research. Researchers currently possess tremendous opportunities to engage in technology-based implementation and dissemination science to help spread evidence-based programs focusing on CVD risk factors in these and other priority populations. Copyright © 2016 Elsevier Inc. All rights reserved.
Intelligent Systems For Aerospace Engineering: An Overview
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.
2003-01-01
Intelligent systems are nature-inspired, mathematically sound, computationally intensive problem solving tools and methodologies that have become extremely important for advancing the current trends in information technology. Artificially intelligent systems currently utilize computers to emulate various faculties of human intelligence and biological metaphors. They use a combination of symbolic and sub-symbolic systems capable of evolving human cognitive skills and intelligence, not just systems capable of doing things humans do not do well. Intelligent systems are ideally suited for tasks such as search and optimization, pattern recognition and matching, planning, uncertainty management, control, and adaptation. In this paper, the intelligent system technologies and their application potential are highlighted via several examples.
HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation
NASA Technical Reports Server (NTRS)
Sterling, Thomas; Bergman, Larry
2000-01-01
Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption.
The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared-memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.
Adaptive Instrument Module: Space Instrument Controller "Brain" through Programmable Logic Devices
NASA Technical Reports Server (NTRS)
Darrin, Ann Garrison; Conde, Richard; Chern, Bobbie; Luers, Phil; Jurczyk, Steve; Mills, Carl; Day, John H. (Technical Monitor)
2001-01-01
The Adaptive Instrument Module (AIM) will be the first true demonstration of reconfigurable computing with field-programmable gate arrays (FPGAs) in space, enabling the 'brain' of the system to evolve or adapt to changing requirements. In partnership with NASA Goddard Space Flight Center and the Australian Cooperative Research Centre for Satellite Systems (CRC-SS), APL has built the flight version to be flown on the Australian university-class satellite FEDSAT. The AIM provides satellites the flexibility to adapt to changing mission requirements by reconfiguring standardized processing hardware rather than incurring the large costs associated with new builds. This ability to reconfigure the processing in response to changing mission needs leads to true evolvable computing, wherein the instrument 'brain' can learn from new science data in order to perform state-of-the-art data processing. The development of the AIM is significant in its enormous potential to reduce total life-cycle costs for future space exploration missions. The advent of RAM-based FPGAs whose configuration can be changed at any time has enabled the development of the AIM for processing tasks that could not be performed in software. The use of the AIM enables reconfiguration of the FPGA circuitry while the spacecraft is in flight, with many accompanying advantages. The AIM demonstrates the practicalities of using reconfigurable computing hardware devices by conducting a series of designed experiments. These include demonstrations of data compression, data filtering, communication message processing, and inter-experiment data computation. The second generation is the Adaptive Processing Template (ADAPT), which is further described in this paper. The next step forward is to make the hardware itself adaptable, and ADAPT pursues this challenge by developing a reconfigurable module that will be capable of functioning efficiently in various applications.
ADAPT will take advantage of radiation tolerant RAM-based field programmable gate array (FPGA) technology to develop a reconfigurable processor that combines the flexibility of a general purpose processor running software with the performance of application specific processing hardware for a variety of high performance computing applications.
Using adaptive grid in modeling rocket nozzle flow
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1992-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and the convergence of the solution will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, needs to be neither very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it in the computer algorithm itself.
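The equidistribution idea behind such solution-adaptive grids can be sketched in one dimension: redistribute points so that each cell carries an equal share of an arc-length monitor function, which clusters points where |du/dx| is large. This is a generic textbook scheme, not the specific method of this paper.

```python
import numpy as np

def adapt_grid(x, u, n_new=None):
    """Redistribute 1D grid points to equidistribute a gradient monitor.

    Points cluster in the sharp-gradient regions the abstract describes,
    using the arc-length monitor w = sqrt(1 + (du/dx)^2).
    """
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    n_new = n_new or len(x)
    dudx = np.gradient(u, x)
    w = np.sqrt(1.0 + dudx ** 2)
    # cumulative monitor integral (trapezoid rule), normalized to [0, 1]
    s = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    s /= s[-1]
    # invert: place new points at equal increments of the monitor integral
    return np.interp(np.linspace(0.0, 1.0, n_new), s, x)
```

In a solver, this redistribution would be applied between time steps so the grid follows the evolving solution, which is the "self-adapting" behavior the abstract describes.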
Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments
NASA Astrophysics Data System (ADS)
Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin
Developments in computer technology over the last decade have changed the ways computers are used. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially of their user interfaces, a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting the applications also to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
Comprehensive Solar-Terrestrial Environment Model (COSTEM) for Space Weather Predictions
2007-07-01
research in data assimilation methodologies applicable to the space environment, as well as "threat adaptive" grid computing technologies, where we... The SWMF [29, 43] was designed in 2001 and has been developed to integrate and couple several... The SWMF is tested by system tests of its components on several computer/compiler platforms. The main design goals of the SWMF were to minimize... documented.
The future of computing--new architectures and new technologies.
Warren, P
2004-02-01
All modern computers are designed using the 'von Neumann' architecture and built using silicon transistor technology. Both architecture and technology have been remarkably successful. Yet there are a range of problems for which this conventional architecture is not particularly well adapted, and new architectures are being proposed to solve these problems, in particular based on insight from nature. Transistor technology has enjoyed 50 years of continuing progress. However, the laws of physics dictate that within a relatively short time period this progress will come to an end. New technologies, based on molecular and biological sciences as well as quantum physics, are vying to replace silicon, or at least coexist with it and extend its capability. The paper describes these novel architectures and technologies, places them in the context of the kinds of problems they might help to solve, and predicts their possible manner and time of adoption. Finally it describes some key questions and research problems associated with their use.
The research on thermal adaptability reinforcement technology for photovoltaic modules
NASA Astrophysics Data System (ADS)
Su, Nana; Zhou, Guozhong
2015-10-01
Nowadays, photovoltaic modules contain more high-performance components in smaller spaces. They are also required to work in severe temperature conditions for special uses, such as aerospace. As temperature rises, the failure rate increases exponentially, which significantly reduces reliability. In order to improve the thermal adaptability of photovoltaic modules, this paper investigates reinforcement technologies. The thermoelectric cooler is widely used in aerospace, which has a harsh working environment. Theoretical formulas for computing refrigerating efficiency, refrigerating capacity, and temperature difference are therefore described in detail. The optimum operating current for three classical working conditions is obtained, which can be used to guide the design of the driving circuit. Taking an equipment enclosure as an example, we use a thermoelectric cooler to reinforce its thermal adaptability. After building physical and thermal models with the aid of the physical dimensions and constraint conditions, the model is simulated in Flotherm. The temperature field contours are shown to verify the effectiveness of the reinforcement.
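The formulas the abstract refers to take, in the standard single-stage thermoelectric-cooler model, a textbook form (possibly differing in detail from the paper's exact expressions):

```latex
% Single-stage TEC: Seebeck coefficient \alpha, module electrical
% resistance R, thermal conductance K, cold-side temperature T_c,
% temperature difference \Delta T = T_h - T_c, drive current I.
Q_c = \alpha I T_c - \tfrac{1}{2} I^2 R - K \,\Delta T
\qquad
P = \alpha I \,\Delta T + I^2 R
\qquad
\mathrm{COP} = \frac{Q_c}{P}
% Setting dQ_c/dI = 0 gives the current that maximizes cooling capacity:
I_{Q_{\max}} = \frac{\alpha T_c}{R}
```

Because the optimum current depends on the operating point (maximum cooling capacity, maximum COP, or a prescribed temperature difference), a different drive current falls out of each working condition, which is consistent with the abstract's three classical working conditions.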
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Mineck, Raymond E.; Barnwell, Richard W.; Kemp, William B., Jr.
1986-01-01
About a decade ago, interest in alleviating wind tunnel wall interference was renewed by advances in computational aerodynamics, concepts of adaptive test section walls, and plans for high Reynolds number transonic test facilities. Selection of NASA Langley cryogenic concept for the National Transonic Facility (NTF) tended to focus the renewed wall interference efforts. A brief overview and current status of some Langley sponsored transonic wind tunnel wall interference research are presented. Included are continuing efforts in basic wall flow studies, wall interference assessment/correction procedures, and adaptive wall technology.
A performance analysis of advanced I/O architectures for PC-based network file servers
NASA Astrophysics Data System (ADS)
Huynh, K. D.; Khoshgoftaar, T. M.
1994-12-01
In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of the same type, data capacity, and cost) operating independently, not in parallel as in a disk array.
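The array-versus-independent-disks question can be framed with a back-of-envelope latency/throughput model. This is a sketch only, not the paper's analysis; the seek time, transfer rate, and the assumption of fully synchronized striping are illustrative:

```python
# Back-of-envelope model contrasting an N-disk synchronized striped array
# with N independently operating disks. Seek time and transfer rate are
# illustrative assumed values, not measurements from the paper.

SEEK_MS = 8.0            # average positioning time per request (ms)
RATE_MB_PER_MS = 0.05    # 50 MB/s sustained transfer rate per disk

def single_disk_time(size_mb):
    return SEEK_MS + size_mb / RATE_MB_PER_MS

def array_time(size_mb, n):
    # All n spindles seek together, then each transfers 1/n of the data.
    return SEEK_MS + (size_mb / n) / RATE_MB_PER_MS

def independent_throughput(size_mb, n):
    # n disks each serve a separate request concurrently (requests/ms).
    return n / single_disk_time(size_mb)

def array_throughput(size_mb, n):
    # The whole array is busy for the duration of each request (requests/ms).
    return 1.0 / array_time(size_mb, n)

if __name__ == "__main__":
    n = 4
    # Large sequential transfer: striping wins on latency.
    print(array_time(64, n), single_disk_time(64))
    # Many small random requests: independent disks win on throughput.
    print(independent_throughput(0.064, n), array_throughput(0.064, n))
```

The model captures the intuition behind the paper's question: striping helps large transfers, while small random workloads can favor independently operating spindles because each request ties up only one disk.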
No Pervasive Computing Without Intelligent Systems
NASA Astrophysics Data System (ADS)
Thompson, S. G.; Azvine, B.
It is interesting to think about the technologies that have become part of our everyday lives and compare their invention and development with those that have fallen by the wayside. Examples of failed technologies such as electric cars and satellite mobile telephones are not uncommon, but more interestingly, numerous other technologies such as instant messaging, text messaging, and B2C eCommerce have moved through the cycle of initial rejection, adoption by a new user community, and adaptation to its needs, despite the early scepticism of many users and commentators.
Vascular surgical data registries for small computers.
Kaufman, J L; Rosenberg, N
1984-08-01
Recent designs for computer-based vascular surgical registries and clinical databases have employed large centralized systems with formal programming and mass storage. Small computers, of the types created for office use or for word processing, now have sufficient speed and memory capacity to allow construction of decentralized office-based registries. Using a standardized dictionary of terms and a method of data organization adapted to word processing, we have created a new vascular surgery data registry, "VASREG." Data files are organized without programming, and a limited number of powerful logical statements in English are used for sorting. The capacity is 25,000 records with current inexpensive memory technology. VASREG is adaptable to computers from a variety of manufacturers, and interface programs are available for converting the word-processor-formatted registry data into forms suitable for analysis by programs written in a standard programming language. This is a low-cost clinical data registry available to any physician. With a standardized dictionary, preparation of regional and national statistical summaries may be facilitated.
Mass Media and the School: Descartes or McLuhan?
ERIC Educational Resources Information Center
Schaeffer, Pierre
1980-01-01
Compares the world of learning with the world of the media, with emphasis on the areas of common interest. Discusses areas of potential cooperation, including local audiovisual centers, adaptation of new media to educational content, computer technology, telematics, and accumulation of audiovisual stock on topics pertinent to education. (DB)
Scheduling quality of precise form sets which consist of tasks of circular type in GRID systems
NASA Astrophysics Data System (ADS)
Saak, A. E.; Kureichik, V. V.; Kravchenko, Y. A.
2018-05-01
Users' demand for computing power and the rise of technology favour the arrival of Grid systems. The quality of a Grid system's performance depends on the scheduling of computer and time resources. Grid systems with a centralized scheduling structure and the user's task are modeled by a resource quadrant and a resource rectangle, respectively. A non-Euclidean heuristic measure, which takes into consideration both the area and the form of an occupied resource region, is used to estimate the scheduling quality of heuristic algorithms. The authors use sets induced by the elements of a squaring of the square as an example for studying the adaptability of a level polynomial algorithm with an excess and one with minimal deviation.
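The "level" algorithms mentioned place resource rectangles on horizontal levels of the resource quadrant. As a sketch of this family (not the paper's own excess or minimal-deviation variants), here is the classic Next-Fit Decreasing Height shelf heuristic:

```python
# Sketch of a generic level (shelf) packing heuristic of the family the
# abstract discusses: Next-Fit Decreasing Height (NFDH). Rectangles are
# sorted by height and placed left to right on shelves of a strip of
# given width; a new shelf opens when the current one is full.
# This is a textbook baseline, not the paper's algorithm.

def nfdh(rectangles, width):
    """rectangles: list of (w, h) pairs; returns total strip height used."""
    rects = sorted(rectangles, key=lambda r: r[1], reverse=True)
    total_height = 0.0
    shelf_height = 0.0
    x = 0.0
    for w, h in rects:
        if x + w > width:              # close the shelf, open a new one
            total_height += shelf_height
            x, shelf_height = 0.0, 0.0
        if shelf_height == 0.0:
            shelf_height = h           # first (tallest) item sets shelf height
        x += w
    return total_height + shelf_height

if __name__ == "__main__":
    print(nfdh([(5, 5), (5, 5), (5, 3), (5, 3)], 10))
```

A quality measure of the kind the abstract describes would then compare the occupied region (total height times width) against the summed rectangle areas and the shape of the packing.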
NASA Astrophysics Data System (ADS)
Papers are presented on local area networks; formal methods for communication protocols; computer simulation of communication systems; spread spectrum and coded communications; tropical radio propagation; VLSI for communications; strategies for increasing software productivity; multiple access communications; advanced communication satellite technologies; and spread spectrum systems. Topics discussed include Space Station communication and tracking development and design; transmission networks; modulation; data communications; computer network protocols and performance; and coding and synchronization. Consideration is given to free space optical communications systems; VSAT communication networks; network topology design; advances in adaptive filtering echo cancellation and adaptive equalization; advanced signal processing for satellite communications; the elements, design, and analysis of fiber-optic networks; and advances in digital microwave systems.
Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries
NASA Astrophysics Data System (ADS)
Deiterding, Ralf; Wood, Stephen L.
2016-09-01
Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
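The core of any lattice Boltzmann scheme like the one described is a collide-and-stream update. A minimal single-relaxation-time (BGK) D2Q9 sketch on a periodic grid is shown below; it omits the paper's adaptivity, large eddy model, and embedded moving boundaries:

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann collide-and-stream sketch on a
# periodic grid (no adaptivity or embedded boundaries, unlike the paper).
W = np.array([4/9] + [1/9]*4 + [1/36]*4)            # lattice weights
C = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])  # lattice velocities

def equilibrium(rho, u):
    cu = np.tensordot(C, u, axes=([1], [0]))        # (9, Nx, Ny)
    usq = (u**2).sum(axis=0)
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau):
    rho = f.sum(axis=0)
    u = np.tensordot(C.T, f, axes=([1], [0])) / rho
    f = f + (equilibrium(rho, u) - f) / tau         # BGK collision
    for i, (cx, cy) in enumerate(C):                # streaming
        f[i] = np.roll(f[i], shift=(cx, cy), axis=(0, 1))
    return f

if __name__ == "__main__":
    # Decaying shear wave: amplitude decays at the rate set by the
    # lattice viscosity nu = (tau - 0.5) / 3.
    N, tau, eps = 32, 0.8, 0.01
    u0 = np.zeros((2, N, N))
    u0[0] = eps * np.sin(2 * np.pi * np.arange(N) / N)[None, :]
    f = equilibrium(np.ones((N, N)), u0)
    for _ in range(50):
        f = step(f, tau)
```

The scheme's locality (collision is pointwise, streaming is a fixed-stencil shift) is what makes it such a natural fit for the parallel adaptive implementation the paper describes.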
NASA Astrophysics Data System (ADS)
Shorikov, A. F.; Butsenko, E. V.
2017-10-01
This paper discusses the problem of multicriteria adaptive optimization of investment project control when several technologies are available. On the basis of network modeling, we propose a new economic-mathematical model and a method for solving this problem. Network economic-mathematical modeling makes it possible to determine the optimal time and calendar schedule for implementing an investment project and serves as an instrument for increasing the economic potential and competitiveness of the enterprise. A practical example illustrates the process of forming network models, including defining the sequence of actions of a particular investment-projecting process and constructing network-based work schedules. The parameters of the network models are calculated, the optimal (critical) paths are formed, and the optimal time for implementing the chosen technologies of the investment project is computed. The example also shows how the optimal technology is selected from a set of possible technologies for project implementation, taking into account the time and cost of the work. The proposed model and method can serve as a basis for developing computer information systems that support managerial decision-making.
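The critical-path computation underlying such network models is the classical CPM recursion: an activity's earliest finish is its duration plus the maximum earliest finish of its predecessors, and the project's critical path is traced back through the maximizing predecessors. A minimal sketch with hypothetical activities and durations:

```python
# Minimal critical path method (CPM) sketch: earliest-finish recursion
# over a DAG of activities. The activity names and durations below are
# hypothetical, used only to illustrate the computation.
from functools import lru_cache

durations = {"A": 3, "B": 2, "C": 2, "D": 4, "E": 1}
preds = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"], "E": ["B"]}

@lru_cache(maxsize=None)
def earliest_finish(act):
    start = max((earliest_finish(p) for p in preds[act]), default=0)
    return start + durations[act]

def critical_path():
    end = max(durations, key=earliest_finish)       # activity finishing last
    path = [end]
    while preds[path[0]]:                           # backtrack through the
        path.insert(0, max(preds[path[0]], key=earliest_finish))  # latest pred
    return path, earliest_finish(end)

if __name__ == "__main__":
    print(critical_path())
```

Comparing several such networks, one per candidate technology, and weighing makespan against cost gives the kind of multicriteria technology selection the abstract describes.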
2011-11-01
based perception of each team member's behavior and physiology with the goal of predicting unobserved variables (e.g., cognitive state). Along with...sensing technologies are showing promise as enablers of computer-based perception of each team member's behavior and physiology with the goal...an essential element of team performance. The perception that other team members may be unable to perform their tasks is detrimental to trust and
Reconfigurable Hardware Adapts to Changing Mission Demands
NASA Technical Reports Server (NTRS)
2003-01-01
A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA s Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA s Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS ), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.
APPLICATION OF INFORMATION AND COMMUNICATION TECHNOLOGIES IN MEDICAL EDUCATION
Al-Tamimi, Dalal M.
2003-01-01
The recognition that information and communication technologies should play an increasingly important role in medical education is key to educating physicians in the 21st century. Computer use in medical education includes Internet hypermedia/multimedia technologies, medical informatics, distance learning, and telemedicine. Adaptation to the use of these technologies should ideally start at the elementary school level. Medical schools must introduce medical informatics courses very early in the medical curriculum. Teachers will need regular CME courses to prepare and update themselves for changing circumstances. Our infrastructure must be prepared for the new developments with computer labs, basic skill labs, closed-circuit television facilities, virtual classrooms, smart classrooms, simulated teaching facilities, and distance teaching by tele-techniques. Our existing manpower, including doctors, nurses, technicians, librarians, and administrative personnel, requires hands-on training, while new recruitment will have to emphasize compulsory knowledge of and familiarity with information technology. This paper highlights these subjects in detail as a means to prepare us to meet the challenges of the 21st century. PMID:23011983
Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity.
Zander, Thorsten O; Krol, Laurens R; Birbaumer, Niels P; Gramann, Klaus
2016-12-27
The effectiveness of today's human-machine interaction is limited by a communication bottleneck as operators are required to translate high-level concepts into a machine-mandated sequence of instructions. In contrast, we demonstrate effective, goal-oriented control of a computer system without any form of explicit communication from the human operator. Instead, the system generated the necessary input itself, based on real-time analysis of brain activity. Specific brain responses were evoked by violating the operators' expectations to varying degrees. The evoked brain activity demonstrated detectable differences reflecting congruency with or deviations from the operators' expectations. Real-time analysis of this activity was used to build a user model of those expectations, thus representing the optimal (expected) state as perceived by the operator. Based on this model, which was continuously updated, the computer automatically adapted itself to the expectations of its operator. Further analyses showed this evoked activity to originate from the medial prefrontal cortex and to exhibit a linear correspondence to the degree of expectation violation. These findings extend our understanding of human predictive coding and provide evidence that the information used to generate the user model is task-specific and reflects goal congruency. This paper demonstrates a form of interaction without any explicit input by the operator, enabling computer systems to become neuroadaptive, that is, to automatically adapt to specific aspects of their operator's mindset. Neuroadaptive technology significantly widens the communication bottleneck and has the potential to fundamentally change the way we interact with technology.
Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter
Loganathan, Shyamala; Mukherjee, Saswati
2015-01-01
Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. As an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources in these private clouds are limited, maximizing resource utilization and guaranteeing service for users are the ultimate goals, and efficient scheduling is needed to achieve them. This research reports an efficient data structure for resource management and a resource-scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with existing methods such as the V-MCT and priority scheduling algorithms. PMID:26473166
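The V-MCT baseline mentioned above extends the classical minimum-completion-time (MCT) heuristic, in which each arriving job is placed on the machine where it would finish earliest. A minimal MCT sketch follows; the VM speeds and job lengths are hypothetical, and this is the baseline idea rather than the paper's proposed algorithm:

```python
# Minimal minimum-completion-time (MCT) scheduling sketch: each job goes
# to the VM on which it would complete earliest. VM speeds and job
# lengths are hypothetical; this illustrates the baseline heuristic,
# not the paper's proposed scheduler.

def mct_schedule(job_lengths, vm_speeds):
    ready = [0.0] * len(vm_speeds)           # time each VM becomes free
    assignment = []
    for length in job_lengths:
        completions = [ready[v] + length / vm_speeds[v]
                       for v in range(len(vm_speeds))]
        v = completions.index(min(completions))
        ready[v] = completions[v]
        assignment.append(v)
    return assignment, max(ready)            # per-job VM index, makespan

if __name__ == "__main__":
    # Three equal jobs on a slow VM (speed 1) and a fast VM (speed 2).
    print(mct_schedule([4, 4, 4], [1.0, 2.0]))
```

A scheduler like the paper's would additionally weigh job type and monitored resource availability before committing each assignment.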
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2001-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Telemedicine: an emerging health care technology.
Myers, Mary R
2003-01-01
Telemedicine uses advanced telecommunication technologies to exchange health information and provide health care services across geographic, time, social, and cultural barriers. All telemedicine applications require the use of the electronic transfer of information. Telemedicine encompasses computer technologies using narrow and high bandwidths for specific types of information transmission, broadcast video, compressed video, full motion video, and even virtual reality. There are many types of common medical devices that have been adapted for use with telemedicine technology, and many clinical services can be provided via telemedicine to patients who live in physician shortage areas. The greatest challenges for telemedicine in the twenty-first century are financing, safety standards, security, and infrastructure.
Challenging Technology, and Technology Infusion into 21st Century
NASA Technical Reports Server (NTRS)
Chau, S. N.; Hunter, D. J.
2001-01-01
In preparing for the space exploration challenges of the next century, the National Aeronautics and Space Administration (NASA) Center for Integrated Space Micro-Systems (CISM) is chartered to develop advanced spacecraft systems that can be adapted for a large spectrum of future space missions. Enabling this task are revolutions in the miniaturization of electrical, mechanical, and computational functions. On the other hand, these revolutionary technologies usually have much lower readiness levels than those required by flight projects. The mission of the Advanced Micro Spacecraft (AMS) task in CISM is to bridge the readiness gap between advanced technologies and flight projects. Additional information is contained in the original extended abstract.
Methods for transition toward computer assisted cognitive examination.
Jurica, P; Valenzi, S; Struzik, Z R; Cichocki, A
2015-01-01
We present a software framework which enables the extension of current methods for the assessment of cognitive fitness using recent technological advances. Screening for cognitive impairment is becoming more important as the world's population grows older, and current methods could be enhanced by the use of computers. Introducing new methods to clinics requires basic tools for collecting and communicating the collected data. Our aim is to develop tools that, with minimal interference, offer new opportunities for enhancing current interview-based cognitive examinations. We suggest methods, and discuss the process, by which established cognitive tests can be adapted for data collection through digitization on pen-enabled tablets. We discuss a number of methods for evaluating the collected data, which promise to increase the resolution and objectivity of the common scoring strategy based on visual inspection. By involving computers in the roles of both instructing and scoring, we aim to increase the precision and reproducibility of cognitive examination. The tools, provided in the Python framework CogExTools available at http://bsp.brain.riken.jp/cogextools/, enable the design, application, and evaluation of screening tests for the assessment of cognitive impairment. The toolbox is a research platform and represents a foundation for further collaborative development by the wider research community; it is open-source and free to download and use. These open-source tools facilitate the design and development of new cognitive tests for modern technology, enable the adaptation of technology for cognitive examination in clinical settings, and provide a first step in a possible transition toward standardized mental-state examination using computers.
Toward a Dynamically Reconfigurable Computing and Communication System for Small Spacecraft
NASA Technical Reports Server (NTRS)
Kifle, Muli; Andro, Monty; Tran, Quang K.; Fujikawa, Gene; Chu, Pong P.
2003-01-01
Future science missions will require the use of multiple spacecraft with multiple sensor nodes autonomously responding and adapting to a dynamically changing space environment. The acquisition of random scientific events will require rapidly changing network topologies, distributed processing power, and a dynamic resource management strategy. Optimum utilization and configuration of spacecraft communications and navigation resources will be critical in meeting the demand of these stringent mission requirements. There are two important trends to follow with respect to NASA's (National Aeronautics and Space Administration) future scientific missions: the use of multiple satellite systems and the development of an integrated space communications network. Reconfigurable computing and communication systems may enable versatile adaptation of a spacecraft system's resources by dynamic allocation of the processor hardware to perform new operations or to maintain functionality due to malfunctions or hardware faults. Advancements in FPGA (Field Programmable Gate Array) technology make it possible to incorporate major communication and network functionalities in FPGA chips and provide the basis for a dynamically reconfigurable communication system. Advantages of higher computation speeds and accuracy are envisioned with tremendous hardware flexibility to ensure maximum survivability of future science mission spacecraft. This paper discusses the requirements, enabling technologies, and challenges associated with dynamically reconfigurable space communications systems.
Adaptive Grid Based Localized Learning for Multidimensional Data
ERIC Educational Resources Information Center
Saini, Sheetal
2012-01-01
Rapid advances in data-rich domains of science, technology, and business has amplified the computational challenges of "Big Data" synthesis necessary to slow the widening gap between the rate at which the data is being collected and analyzed for knowledge. This has led to the renewed need for efficient and accurate algorithms, framework,…
Adapting Computational Data Structures Technology to Reason about Infinity
ERIC Educational Resources Information Center
Goldberg, Robert; Hammerman, Natalie
2004-01-01
The NCTM curriculum states that students should be able to "compare and contrast the real number system and its various subsystems with regard to their structural characteristics." In evaluating overall conformity to the 1989 standard, the National Council of Teachers of Mathematics (NCTM) requires that "teachers must value and encourage the use…
Technology and Assessment. In Brief: Fast Facts for Policy and Practice No. 5.
ERIC Educational Resources Information Center
Austin, James T.; Mahlman, Robert A.
The process of assessment in career and technical education (CTE) is changing significantly under the influence of forces such as emphasis on assessment for individual and program accountability; emphasis on the investigation of consequences of assessment; emergence of item response theory, which supports computer adaptive testing; and pressure…
Education and Training in Japan in the Cybernetic Age. Program Report No. 85-B2.
ERIC Educational Resources Information Center
Muta, Hiromitsu
The introduction of computers and other microelectronic equipment throughout the Japanese economy has not affected employment negatively, owing to economic growth and the adaptability of the workers and business organizations affected. Because rapid advances in technology are making many specialized skills and areas of knowledge obsolete, it is…
Technology-Enhanced Learning Environments. Case Studies in TESOL Practice Series.
ERIC Educational Resources Information Center
Hanson-Smith, Elizabeth, Ed.
This edited volume presents case studies from Europe, North America, Asia, and the Middle East in which teachers have adapted and pioneered teaching innovations. The book is divided into 4 parts, 12 chapters, and an introduction. Part one, "Building a Computer Learning Center," has two chapters: "Guerilla Tactics: Creating a…
Adaptive 3D Virtual Learning Environments--A Review of the Literature
ERIC Educational Resources Information Center
Scott, Ezequiel; Soria, Alvaro; Campo, Marcelo
2017-01-01
New ways of learning have emerged in the last years by using computers in education. For instance, many Virtual Learning Environments have been widely adopted by educators, obtaining promising outcomes. Recently, these environments have evolved into more advanced ones using 3D technologies and taking into account the individual learner needs and…
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptation strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Smart Offices and Intelligent Decision Rooms
NASA Astrophysics Data System (ADS)
Ramos, Carlos; Marreiros, Goreti; Santos, Ricardo; Freitas, Carlos Filipe
Nowadays, computing technology research is focused on the development of Smart Environments. Following that line of thought, several Smart Room projects have been developed, with very diverse applications, including projects in the context of the workplace, everyday living, entertainment, play, and education. These applications aim to acquire and apply knowledge about the state of the environment in order to reason about it, define a desired state for its inhabitants, and adapt to their desires, thereby improving their involvement in and satisfaction with that environment.
Computational model for behavior shaping as an adaptive health intervention strategy.
Berardi, Vincent; Carretero-González, Ricardo; Klepeis, Neil E; Ghanipoor Machiani, Sahar; Jahangiri, Arash; Bellettiere, John; Hovell, Melbourne
2018-03-01
Adaptive behavioral interventions that automatically adjust in real time to participants' changing behavior, environmental contexts, and individual history are becoming more feasible as the use of real-time sensing technology expands. This development is expected to improve shortcomings associated with traditional behavioral interventions, such as the reliance on imprecise intervention procedures and limited, short-lived effects. However, the adaptation strategies of just-in-time adaptive interventions (JITAIs) often lack a theoretical foundation, even though increasing the theoretical fidelity of a trial has been shown to increase its effectiveness. This research explores the use of shaping, a well-known process from behavioral theory for engendering or maintaining a target behavior, as a JITAI adaptation strategy. A computational model of behavior dynamics and operant conditioning was modified to incorporate the construct of behavior shaping by adding the ability to vary, over time, the range of behaviors that were reinforced when emitted. Digital experiments were performed with this updated model over a range of parameters in order to identify the behavior-shaping features that optimally generated target behavior. Narrowing the range of reinforced behaviors continuously in time led to better outcomes than a discrete narrowing of the reinforcement window. Rapid narrowing followed by more moderate decreases in window size was more effective in generating target behavior than the inverse scenario. The computational shaping model represents an effective tool for investigating JITAI adaptation strategies. Model parameters must now be translated from the digital domain to real-world experiments so that model findings can be validated.
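The shaping mechanism described, reinforcing behaviors inside a window around the target and narrowing that window over time, can be illustrated with a toy simulation. This is a deliberately simplified sketch, not the paper's behavior-dynamics model; the learning rule, window schedule, and parameters are assumptions:

```python
import random

# Toy operant "shaping" sketch (not the paper's model): a behavior is
# emitted from a distribution around the agent's current tendency (mean);
# behaviors falling inside a reinforcement window centered on the target
# are reinforced, shifting the tendency toward them. The window narrows
# continuously over time, as in the abstract's best-performing scheme.

def shape(target=10.0, mean=0.0, steps=4000, lr=0.3, seed=1):
    rng = random.Random(seed)
    r0, r1 = abs(target - mean) + 3.0, 0.5   # window radius: wide -> narrow
    for t in range(steps):
        radius = r0 + (r1 - r0) * t / steps  # continuous linear narrowing
        b = rng.gauss(mean, 1.0)             # emitted behavior
        if abs(b - target) <= radius:        # inside the window: reinforce
            mean += lr * (b - mean)          # tendency shifts toward b
    return mean

if __name__ == "__main__":
    print(abs(shape() - 10.0))               # distance to target after shaping
```

Because the agent's tendency is only updated when a reinforced behavior occurs, the narrowing window acts as a ratchet that pulls behavior toward the target, which is the essence of shaping by successive approximation.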
Live interactive computer music performance practice
NASA Astrophysics Data System (ADS)
Wessel, David
2002-05-01
A live-performance musical instrument can be assembled around current laptop computer technology. One adds a controller such as a keyboard or other gestural input device, a sound diffusion system, some form of connectivity processor(s) providing for audio I/O and gestural controller input, and reactive real-time native signal-processing software. A system is described consisting of a hand-gesture controller; software for gesture analysis and mapping, machine listening, composition, and sound synthesis; and a loudspeaker with a controllable radiation pattern. Interactivity begins in the setup, wherein the speaker-room combination is tuned with an LMS procedure. This system was designed for improvisation. It is argued that software suitable for carrying out an improvised musical dialog with another performer poses special challenges. The processes underlying the generation of musical material must be very adaptable, capable of rapid changes in musical direction. Machine listening techniques are used to help the performer adapt to new contexts, and machine learning can play an important role in the development of such systems. In the end, as with any musical instrument, human skill is essential. Practice is required not only for the development of musically appropriate human motor programs but for the adaptation of the computer-based instrument as well.
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm for parallel processing of applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on NVIDIA graphics cards. The graphics processing unit (GPU) is hardware specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of NVIDIA GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four NVIDIA GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
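The classical Gerchberg-Saxton iteration underlying MGS alternates between two planes related by a Fourier transform, imposing the measured amplitude in each plane while retaining the computed phase. A minimal plain-NumPy sketch of that core loop is below; it includes none of the MGS modifications or GPU acceleration the abstract describes:

```python
import numpy as np

# Minimal classical Gerchberg-Saxton sketch: recover a phase consistent
# with measured amplitudes in two planes related by an FFT. This is the
# textbook core loop only, without the MGS modifications or the GPU
# (CUDA) acceleration discussed in the abstract.

def gerchberg_saxton(amp_in, amp_out, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    g = amp_in * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_in.shape))
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = amp_out * np.exp(1j * np.angle(G))   # impose focal-plane amplitude
        g = np.fft.ifft2(G)
        g = amp_in * np.exp(1j * np.angle(g))    # impose pupil-plane amplitude
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    true = np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))   # unit-amplitude field
    amp_in, amp_out = np.abs(true), np.abs(np.fft.fft2(true))
    est = gerchberg_saxton(amp_in, amp_out)
    print(np.abs(np.abs(np.fft.fft2(est)) - amp_out).mean())  # residual error
```

Since each iteration is dominated by two 2-D FFTs, the speedups reported in the abstract follow directly from moving exactly this kind of loop onto GPU stream processors.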
Putting the brain to work: neuroergonomics past, present, and future.
Parasuraman, Raja; Wilson, Glenn F
2008-06-01
The authors describe research and applications in prominent areas of neuroergonomics. Because human factors/ergonomics examines behavior and mind at work, it should include the study of brain mechanisms underlying human performance. Neuroergonomic studies are reviewed in four areas: workload and vigilance, adaptive automation, neuroengineering, and molecular genetics and individual differences. Neuroimaging studies have helped identify the components of mental workload, workload assessment in complex tasks, and resource depletion in vigilance. Furthermore, real-time neurocognitive assessment of workload can trigger adaptive automation. Neural measures can also drive brain-computer interfaces to provide disabled users new communication channels. Finally, variants of particular genes can be associated with individual differences in specific cognitive functions. Neuroergonomics shows that considering what makes work possible - the human brain - can enrich understanding of the use of technology by humans and can inform technological design. Applications of neuroergonomics include the assessment of operator workload and vigilance, implementation of real-time adaptive automation, neuroengineering for people with disabilities, and design of selection and training methods.
Patient-based radiographic exposure factor selection: a systematic review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ching, William; Robinson, John; McEntee, Mark, E-mail: mark.mcentee@sydney.edu.au
Digital technology has wider exposure latitude and post-processing algorithms which can mask the evidence of underexposure and overexposure. Underexposure produces noisy, grainy images which can impede diagnosis, and overexposure results in a greater radiation dose to the patient. These exposure errors can result from inaccurate adjustment of exposure factors in response to changes in patient thickness. This study aims to identify all published radiographic exposure adaptation systems which have been, or are being, used in general radiography and to discuss their applicability to digital systems. Studies in EMBASE, MEDLINE, CINAHL and SCOPUS were systematically reviewed. Some of the search terms used were exposure adaptation, exposure selection, exposure technique, 25% rule, 15% rule, DuPont™ Bit System and radiography. A manual journal-specific search was also conducted in The Radiographer and Radiologic Technology. Studies were included if they demonstrated a system of altering exposure factors to compensate for variations in patients for general radiography. Studies were excluded if they focused on finding optimal exposures for an ‘average’ patient or focused on the relationship between exposure factors and dose. The database search uncovered 11 articles and the journal-specific search uncovered 13 articles discussing systems of exposure adaptation. They can be categorised as simple one-step guidelines, comprehensive charts and computer programs. Only two papers assessed the efficacy of exposure adjustment systems. No literature compares the efficacy of exposure adaptation systems for film/screen radiography with digital radiography technology, nor is there literature on a digital-specific exposure adaptation system.
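The one-step guidelines named in the search terms have simple arithmetic forms. The sketch below encodes one common reading of the 25% rule (adjust mAs by 25% per centimetre of patient-thickness change) and the 15% rule (a 15% kVp increase roughly doubles receptor exposure, so mAs can be halved); these numeric interpretations are illustrative, not clinically validated formulas from the review.

```python
def adapt_mas_for_thickness(mas, delta_cm):
    """25% rule sketch: scale mAs by 25% for each cm the patient part
    is thicker (delta_cm > 0) or thinner (delta_cm < 0) than baseline."""
    return mas * (1.25 ** delta_cm)

def adapt_kvp_15_percent(kvp, mas):
    """15% rule sketch: raise kVp by 15% and halve mAs to keep the
    receptor exposure roughly constant while increasing penetration."""
    return kvp * 1.15, mas / 2.0
```

For example, starting from 80 kVp and 10 mAs, the 15% rule gives 92 kVp at 5 mAs; two extra centimetres of tissue under the 25% rule raise 10 mAs to about 15.6 mAs.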
Big data mining analysis method based on cloud computing
NASA Astrophysics Data System (ADS)
Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao
2017-08-01
In the era of information explosion, the very large scale and the discrete, non- or semi-structured features of big data have gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical approach to mining massive data, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association rule mining algorithm based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
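The MapReduce formulation of frequent-itemset counting (the support-counting core of association rule mining) splits naturally into a map step that emits (itemset, 1) pairs and a reduce step that sums them. The toy, single-process sketch below illustrates that decomposition; it is not the paper's algorithm, and the function names are hypothetical.

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions, k):
    """Map: for each transaction, emit (itemset, 1) for every
    k-item combination it contains."""
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            yield itemset, 1

def reduce_phase(pairs, min_support):
    """Reduce: sum the counts per itemset and keep those meeting
    the minimum support threshold."""
    counts = Counter()
    for itemset, n in pairs:
        counts[itemset] += n
    return {s: c for s, c in counts.items() if c >= min_support}
```

In a real MapReduce job the pairs would be partitioned by itemset across reducers, so each reducer sums its share independently; that is what makes the counting step parallelizable over massive transaction logs.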
Computer Applications in Health Science Education.
Juanes, Juan A; Ruisoto, Pablo
2015-09-01
In recent years, computer application development has experienced exponential growth, not only in the number of publications but also in the scope or contexts that have benefited from its use. In health science training, and medicine specifically, the gradual incorporation of technological developments has transformed the teaching and learning process, resulting in true "educational technology". The goal of this paper is to review the main features involved in these applications and highlight the main lines of research for the future. The results of peer-reviewed literature published recently indicate the following features shared by the key technological developments in the field of health science education: first, development of simulation and visualization systems for a more complete and realistic representation of learning material than traditional paper formats; second, portability and versatility of the applications, adapted for an increasing number of devices and operating systems; third, increasing focus on open-access offerings such as Massive Open Online Courses (MOOCs).
Meeting the challenges--the role of medical informatics in an ageing society.
Koch, Sabine
2006-01-01
The objective of this paper is to identify trends and new technological developments that appear due to an ageing society and to relate them to current research in the field of medical informatics. A survey of the current literature reveals that recent technological advances have been made in the fields of "telecare and home-monitoring", "smart homes and robotics" and "health information systems and knowledge management". Innovative technologies such as wearable devices, bio- and environmental sensors and mobile, humanoid robots already exist, and ambient assisted living environments are being created for an ageing society. However, those technologies have to be adapted to older people's self-care processes and coping strategies, and to support new ways of healthcare delivery. Medical informatics can support this process by providing the necessary information infrastructure, contributing to standardisation, interoperability and security, and providing modelling and simulation techniques for educational purposes. Research fields of increasing importance with regard to an ageing society are, moreover, the fields of knowledge management, ubiquitous computing and human-computer interaction.
Efficient utilization of graphics technology for space animation
NASA Technical Reports Server (NTRS)
Panos, Gregory Peter
1989-01-01
Efficient utilization of computer graphics technology has become a major investment in the work of aerospace engineers and mission designers. These new tools are having a significant impact in the development and analysis of complex tasks and procedures which must be prepared prior to actual space flight. Design and implementation of useful methods in applying these tools has evolved into a complex interaction of hardware, software, network, video and various user interfaces. Because few people can understand every aspect of this broad mix of technology, many specialists are required to build, train, maintain and adapt these tools to changing user needs. Researchers have set out to create systems where an engineering designer can easily work to achieve goals with a minimum of technological distraction. This was accomplished with high-performance flight simulation visual systems and supercomputer computational horsepower. Control throughout the creative process is judiciously applied while maintaining generality and ease of use to accommodate a wide variety of engineering needs.
FRIEND: a brain-monitoring agent for adaptive and assistive systems.
Morris, Alexis; Ulieru, Mihaela
2012-01-01
This paper presents an architectural design for adaptive-systems agents (FRIEND) that use brain state information to make more effective decisions on behalf of a user, measuring brain context against situational demands. These systems could be useful for alerting users to cognitive workload levels or fatigue, and could attempt to compensate for higher cognitive activity by filtering noise information. In some cases such systems could also share control of devices, such as pulling over in an automated vehicle. These aim to assist people in everyday systems to perform tasks better and be more aware of internal states. Achieving a functioning system of this sort is a challenge, involving a unification of the brain-computer interface, human-computer interaction, soft computing, and deliberative multi-agent systems disciplines. Until recently, these were not able to be combined into a usable platform due largely to technological limitations (e.g., size, cost, and processing speed), insufficient research on extracting behavioral states from EEG signals, and lack of low-cost wireless sensing headsets. We aim to surpass these limitations and develop control architectures for making sense of brain state in applications by realizing an agent architecture for adaptive (human-aware) technology. In this paper we present an early, high-level design towards implementing a multi-purpose brain-monitoring agent system to improve user quality of life through the assistive applications of psycho-physiological monitoring, noise-filtering, and shared system control.
Audio-Enhanced Tablet Computers to Assess Children's Food Frequency From Migrant Farmworker Mothers.
Kilanowski, Jill F; Trapl, Erika S; Kofron, Ryan M
2013-06-01
This study sought to improve data collection in children's food frequency surveys for non-English speaking immigrant/migrant farmworker mothers using audio-enhanced tablet computers (ATCs). We hypothesized that by using technological adaptations, we would be able to improve data capture and therefore reduce lost surveys. This Food Frequency Questionnaire (FFQ), a paper-based dietary assessment tool, was adapted for ATCs and assessed consumption of 66 food items, asking 3 questions for each food item: frequency, quantity of consumption, and serving size. The tablet-based survey was audio enhanced with each question "read" to participants, accompanied by food item images, together with an embedded short instructional video. Results indicated that respondents were able to complete the 198 questions from the 66-item FFQ on ATCs in approximately 23 minutes. Compared with paper-based FFQs, ATC-based FFQs had less missing data. Despite overall reductions in missing data by use of ATCs, respondents still appeared to have difficulty with question 2 of the FFQ. Ability to score the FFQ depended on the sections in which missing data were located. Unlike the paper-based FFQs, no ATC-based FFQs were unscorable due to the amount or location of missing data. An ATC-based FFQ was feasible and increased the ability to score this survey of children's food patterns as reported by migrant farmworker mothers. This adapted technology may serve as an exemplar for other non-English speaking immigrant populations.
Hale, Leigh A; Satherley, Jessica A; McMillan, Nicole J; Milosavljevic, Stephan; Hijmans, Juha M; King, Marcus J
2012-01-01
This article reports on the perceptions of 14 adults with chronic stroke who participated in a pilot study to determine the utility, acceptability, and potential efficacy of using an adapted CyWee Z handheld game controller to play a variety of computer games aimed at improving upper-limb function. Four qualitative in-depth interviews and two focus groups explored participant perceptions. Data were thematically analyzed with the general inductive approach. Participants enjoyed playing the computer games with the technology. The perceived benefits included improved upper-limb function, concentration, and balance; however, six participants reported shoulder and/or arm pain or discomfort, which presented while they were engaged in play but appeared to ease during rest. Participants suggested changes to the games and provided opinions on the use of computer games in rehabilitation. Using an adapted CyWee Z controller and computer games in upper-limb rehabilitation for people with chronic stroke is an acceptable and potentially beneficial adjunct to rehabilitation. The development of shoulder pain was a negative side effect for some participants and requires further investigation.
Carmichael, Clare; Carmichael, Patrick
2014-01-01
This paper highlights aspects related to current research and thinking about ethical issues in relation to Brain Computer Interface (BCI) and Brain-Neuronal Computer Interfaces (BNCI) research through the experience of one particular project, BrainAble, which is exploring and developing the potential of these technologies to enable people with complex disabilities to control computers. It describes how ethical practice has been developed both within the multidisciplinary research team and with participants. The paper presents findings in which participants shared their views of the project prototypes, of the potential of BCI/BNCI systems as an assistive technology, and of their other possible applications. This draws attention to the importance of ethical practice in projects where high expectations of technologies, and representations of "ideal types" of disabled users may reinforce stereotypes or drown out participant "voices". Ethical frameworks for research and development in emergent areas such as BCI/BNCI systems should be based on broad notions of a "duty of care" while being sufficiently flexible that researchers can adapt project procedures according to participant needs. They need to be frequently revisited, not only in the light of experience, but also to ensure they reflect new research findings and ever more complex and powerful technologies.
Adapting to Change in a Master Level Real-World-Projects Capstone Course
ERIC Educational Resources Information Center
Tappert, Charles C.; Stix, Allen
2012-01-01
Our mission of capstone computing courses for the past ten years has been to offer students experience with the development of real-world information technology projects. This experience has included both the hard and soft skills required for the work they could expect as industrial practitioners. Hard skills entail extending one's knowledge…
Librarians on the Loose: Breaking out of the Library to Create a Culture of Literacy
ERIC Educational Resources Information Center
Lawrence, Ellen
2014-01-01
With the influx of technology, school library programs started to adapt traditional services by embracing new innovative resources. However, even with online research databases and cloud computing, most school libraries still occupy a fixed physical space in the school, a destination for students and teachers when they need help with reading or…
A Long-Term Model for the Curriculum of Training for an Electric-Power Specialist
ERIC Educational Resources Information Center
Venikov, V. A.
1978-01-01
Long-term planning for professional training of electric-power specialists in Russia will have to (1) recognize the need for specialists to adapt to unforeseen developments in the field, (2) include new mathematics, physics, and computer technology, and (3) be prepared for changes in methods of production and transformation of energy. (AV)
ERIC Educational Resources Information Center
Pan, Edward A.
2013-01-01
Science, technology, engineering, and mathematics (STEM) education is a national focus. Engineering education, as part of STEM education, needs to adapt to meet the needs of the nation in a rapidly changing world. Using computer-based visualization tools and corresponding 3D printed physical objects may help nontraditional students succeed in…
Visualization and modeling of smoke transport over landscape scales
Glenn P. Forney; William Mell
2007-01-01
Computational tools have been developed at the National Institute of Standards and Technology (NIST) for modeling fire spread and smoke transport. These tools have been adapted to address fire scenarios that occur in the wildland urban interface (WUI) over kilometer-scale distances. These models include the smoke plume transport model ALOFT (A Large Open Fire plume...
Contemporary Youth and the Postmodern Adventure
ERIC Educational Resources Information Center
Best, Steven; Kellner, Douglas
2003-01-01
Contemporary youth are major players in the postmodern adventure because it is they who will enter the future and further shape the world to come. For youth today, change is the name of the game and they are forced to adapt to a rapidly mutating and crisis-ridden world characterized by novel information, computer and genetic technologies; a…
Trusted computation through biologically inspired processes
NASA Astrophysics Data System (ADS)
Anderson, Gustave W.
2013-05-01
Due to supply chain threats it is no longer a reasonable assumption that traditional protections alone will provide sufficient security for enterprise systems. The proposed cognitive trust model architecture extends the state-of-the-art in enterprise anti-exploitation technologies by providing collective immunity through backup and cross-checking, proactive health monitoring and adaptive/autonomic threat response, and network resource diversity.
Borgestig, Maria; Falkmer, Torbjörn; Hemmingsson, Helena
2013-11-01
The aim of this study was to evaluate the effect of an assistive technology (AT) intervention to improve the use of available computers as assistive technology in educational tasks for students with physical disabilities during an ongoing school year. Fifteen students (aged 12-18) with physical disabilities, included in mainstream classrooms in Sweden, and their teachers took part in the intervention. Pre-, post-, and follow-up data were collected with Goal Attainment Scaling (GAS), a computer usage diary, and with the Psychosocial Impact of Assistive Devices Scale (PIADS). Teachers' opinions of goal setting were collected at follow-up. The intervention improved the goal-related computer usage in educational tasks and teachers reported they would use goal setting again when appropriate. At baseline, students reported a positive impact from computer usage with no differences over time regarding the PIADS subscales independence, adaptability, or self-esteem. The AT intervention showed a positive effect on computer usage as AT in mainstream schools. Some additional support to teachers is recommended as not all students improved in all goal-related computer usage. A clinical implication is that students' computer usage can be improved and collaboratively established computer-based strategies can be carried out by teachers in mainstream schools.
Minkara, Mona S; Weaver, Michael N; Gorske, Jim; Bowers, Clifford R; Merz, Kenneth M
2015-08-11
There exists a sparse representation of blind and low-vision students in science, technology, engineering and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees as well as a lack of appropriate adaptive resources in upper level STEM courses and research. Mona Minkara is a rising fifth year graduate student in computational chemistry at the University of Florida. She is also blind. This account presents efforts conducted by an expansive team of university and student personnel in conjunction with Mona to adapt different portions of the graduate student curriculum to meet Mona's needs. The most important consideration is prior preparation of materials to assist with coursework and cumulative exams. Herein we present an account of the first four years of Mona's graduate experience hoping this will assist in the development of protocols for future blind and low-vision graduate students in computational chemistry.
Kocaağaoğlu, Hasan; Albayrak, Haydar; Kilinc, Halil Ibrahim; Gümüs, Hasan Önder
2017-11-01
The use of computer-aided design and computer-aided manufacturing (CAD-CAM) for metal-ceramic restorations has increased with advances in the technology. However, little is known about the marginal and internal adaptation of restorations fabricated using laser sintering (LS) and soft milling (SM). Moreover, the effects of repeated ceramic firings on the marginal and internal adaptation of metal-ceramic restorations fabricated with LS and SM is also unknown. The purpose of this in vitro study was to investigate the effects of repeated ceramic firings on the marginal and internal adaptation of metal-ceramic copings fabricated using the lost wax (LW), LS, and SM techniques. Ten LW, 10 LS, and 10 SM cobalt-chromium (Co-Cr) copings were fabricated for an artificial tooth (Frasaco GmbH). After the application of veneering ceramic (VITA VMK Master; VITA Zahnfabrik), the marginal and internal discrepancies of these copings were measured with a silicone indicator paste and a stereomicroscope at ×100 magnification after the first, second, and third clinical simulated ceramic firing cycles. Repeated measures 2-way ANOVA and the Fisher LSD post hoc test were used to evaluate differences in marginal and internal discrepancies (α=.05). Neither fabrication protocol nor repeated ceramic firings had any statistically significant effect on internal discrepancy values (P>.05). Marginal discrepancy values were also statistically unaffected by repeated ceramic firings (P>.05); however, the fabrication protocol had a significant effect on marginal discrepancy values (P<.001), with LW resulting in higher marginal discrepancy values than LS or SM (P<.05). Marginal discrepancy values did not vary between LS and SM (P>.05). All groups demonstrated clinically acceptable marginal adaptation after repeated ceramic firing cycles; however, the LS and SM groups demonstrated better marginal adaptation than that of LW group and may be appropriate clinical alternatives to LW. 
Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Mental models, metaphors and their use in the education of nurses.
Burke, L M; Wilson, A M
1997-11-01
A great deal of nurses' confidence in the use of information technology (IT) depends both on the way computers are introduced to students in the college and how such education is continued and applied when they are practitioners. It is therefore vital that teachers of IT assist nurses to discover ways of learning to utilize and apply computers within their workplace with whatever methods are available. One method which has been introduced with success in other fields is the use of mental models and metaphors. Mental models and metaphors enable individuals to learn by building on past learning. Concepts and ideas which have already been internalized from past experience can be transferred and adapted for usage in a new learning situation with computers and technology. This article explores the use of mental models and metaphors for the technological education of nurses. The concepts themselves will be examined, followed by suggestions for possible applications specifically in the field of nursing and health care. Finally the role of the teacher in enabling improved learning as a result of these techniques will be addressed.
Progress in hyperspectral imaging of vegetation
NASA Astrophysics Data System (ADS)
Goetz, Alexander F. H.
2001-03-01
Computer-related technologies, such as the Internet, have posed new challenges for intellectual property law. Legislation and court decisions impacting patents, copyrights, trade secrets and trademarks have adapted intellectual property law to address new issues brought about by such emerging technologies. As the pace of technological change continues to increase, intellectual property law will need to keep up. Accordingly, the balance struck by intellectual property laws today will likely be set askew by technological changes in the future. Engineers need to consider not only the law as it exists today, but also how it might change in the future. Likewise, lawyers and judges need to consider legal issues not only in view of the current state of the art in technology, but also with an eye to technologies yet to come.
H-P adaptive methods for finite element analysis of aerothermal loads in high-speed flows
NASA Technical Reports Server (NTRS)
Chang, H. J.; Bass, J. M.; Tworzydlo, W.; Oden, J. T.
1993-01-01
The commitment to develop the National Aerospace Plane and Maneuvering Reentry Vehicles has generated resurgent interest in the technology required to design structures for hypersonic flight. The principal objective of this research and development effort has been to formulate and implement a new class of computational methodologies for accurately predicting fine scale phenomena associated with this class of problems. The initial focus of this effort was to develop optimal h-refinement and p-enrichment adaptive finite element methods which utilize a-posteriori estimates of the local errors to drive the adaptive methodology. Over the past year this work has specifically focused on two issues which are related to overall performance of a flow solver. These issues include the formulation and implementation (in two dimensions) of an implicit/explicit flow solver compatible with the hp-adaptive methodology, and the design and implementation of computational algorithm for automatically selecting optimal directions in which to enrich the mesh. These concepts and algorithms have been implemented in a two-dimensional finite element code and used to solve three hypersonic flow benchmark problems (Holden Mach 14.1, Edney shock on shock interaction Mach 8.03, and the viscous backstep Mach 4.08).
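The error-driven refinement loop at the heart of such adaptive methods is easy to state: estimate a local error per element, then subdivide (h-refine) where the estimate exceeds tolerance. The sketch below shows one pass of h-refinement on a 1-D mesh; it is an illustrative toy, not the paper's two-dimensional hp-adaptive scheme, and the error-estimator interface is an assumption.

```python
def adapt_mesh(nodes, error_estimate, tol):
    """One pass of h-refinement on a sorted 1-D mesh: bisect every
    element whose local a-posteriori error estimate exceeds tol."""
    new_nodes = [nodes[0]]
    for a, b in zip(nodes, nodes[1:]):
        if error_estimate(a, b) > tol:
            new_nodes.append(0.5 * (a + b))  # insert the element midpoint
        new_nodes.append(b)
    return new_nodes
```

In practice the loop is repeated (solve, estimate, refine) until all local estimates fall below tolerance, and a p-enrichment variant raises the polynomial order instead of splitting the element.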
NASA Astrophysics Data System (ADS)
Binboğa, Elif; Korhan, Orhan
2014-10-01
Educational ergonomics focuses on the interaction between educational performance and educational design. By improving the design or pointing out possible problems, educational ergonomics can have positive impacts on student performance and thus on the education process. Laptops and tablet computers are becoming widely used by school children and beginning to be used effectively for educational purposes. As the latest generation of laptops and tablet computers are mobile and lightweight compared to conventional personal computers, they support student-centred, interaction-based learning. However, these technologies have been introduced into schools with minimal adaptations to furniture or attention to ergonomics. There are increasing reports of an association between increased musculoskeletal (MSK) problems in children and use of such technologies. Although children are among the users of laptops and tablet computers both in their everyday lives and at schools, the literature investigating MSK activities and possible MSK discomfort among children using portable technologies is limited. This study reviews the literature to identify published studies that investigated posture, MSK activities, and possible MSK discomfort among children using mobile technologies (laptops or tablet computers) for educational purposes. An electronic search of the literature published in English between January 1994 and January 2014 was performed in several databases. The literature search terms were identified and combined to search the databases. The search revealed that resources investigating MSK outcomes of laptop or tablet use by children are very scarce. This review points out the research gaps in this field and identifies areas for future studies.
Real time AI expert system for robotic applications
NASA Technical Reports Server (NTRS)
Follin, John F.
1987-01-01
A computer-controlled multi-robot process cell to demonstrate advanced technologies for the demilitarization of obsolete chemical munitions was developed. The methods through which the vision system and other sensory inputs were used by the artificial intelligence to provide the information required to direct the robots to complete the desired task are discussed. The mechanisms that the expert system uses to solve problems (goals), the different rule databases, and the methods for adapting this control system to any device that can be controlled or programmed through a high-level computer interface are discussed.
NASA Technical Reports Server (NTRS)
Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric
2004-01-01
Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes use of resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction-related tasks in an emulated outdoor environment despite high levels of uncertainty in motions and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.
Ten years of CLIVE (Computer-Aided Learning in Veterinary Education) in the United Kingdom.
Dale, Vicki H M; McConnell, Gill; Short, Andrew; Sullivan, Martin
2005-01-01
This paper outlines the work of the CLIVE (Computer-Aided Learning in Veterinary Education) project over a 10-year period, set against the backdrop of changes in education policy and learning technology developments. The consortium of six UK veterinary schools and 14 international Associate Member Schools has been very successful. Sustaining these partnerships requires that the project redefine itself and adapt to cater to the diverse learning needs of today's students and to changing professional and societal needs on an international scale.
Irinoye, Omolola O; Ayandiran, Emmanuel Olufemi; Fakunle, Imoleayo; Mtshali, Ntombifikile
2013-08-01
The impact of information technology on nursing has been a subject of discourse for the latter half of the 20th century and the early part of the 21st. Despite its obvious benefits, adapting information technology to healthcare has been relatively difficult, and rates of use have been limited, especially in many developing countries. This quantitative study has shown a generally low usage of information technology among nurses in the study setting. Many of the nurses rated themselves as novices in information technology, with 37.8% stating that they had never had formal training in information technology and many rating themselves as possessing little or no skill in the use of spreadsheets, databases, and so on. Many (55.6%) stated that they do not have access to information technology, despite the fairly widespread favourable perception of it established among them. Results further showed that unreliable network connections, high work demand, inadequate numbers of computers, poor access to computers owing to inconvenient locations, and poor system design that fails to fit work demands are some of the major barriers to the use of information technology in the study setting. These factors therefore need to be taken into consideration in any intervention that seeks to improve nurses' use of information technology in the clinical setting.
Distributed computing environments for future space control systems
NASA Technical Reports Server (NTRS)
Viallefont, Pierre
1993-01-01
The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon
2017-02-28
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
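The adaptive-focus idea above can be reduced to simple vergence arithmetic: content meant to appear at a distance of d metres has vergence 1/d dioptres, and a focus-tunable lens can fold the user's spectacle prescription into the same setting. The function below is an illustrative sketch of that bookkeeping only, not the prototypes' actual gaze-tracking control loop.

```python
def lens_vergence(gaze_depth_m, prescription_d=0.0):
    """Target vergence (in dioptres) for a focus-tunable near-eye display.

    gaze_depth_m: distance of the virtual object the user is looking at.
    prescription_d: user's spherical refractive error in dioptres
                    (negative for myopia), folded into the lens setting.
    Illustrative sketch only.
    """
    if gaze_depth_m <= 0:
        raise ValueError("gaze depth must be positive")
    return 1.0 / gaze_depth_m + prescription_d
```

For example, an emmetropic user looking 2 m away needs 0.5 D of vergence, while a -2 D myope looking at 0.5 m needs a net setting of 0 D, since the correction cancels the scene vergence.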
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin
2009-07-01
This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology to the system. An image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in computer simulation. The SPGD algorithm is run at seven zoom positions to calculate the optimized surface shape of the deformable mirror. A comparison of the performance of the corrected system with the baseline system shows that AO technology is an effective way of correcting the dynamic residual aberrations in conformal optical design.
Context Aware Systems, Methods and Trends in Smart Home Technology
NASA Astrophysics Data System (ADS)
Robles, Rosslin John; Kim, Tai-Hoon
Context aware applications respond and adapt to changes in the computing environment. It is the concept of leveraging information about the end user to improve the quality of the interaction. New technologies in context-enriched services will use location, presence, social attributes, and other environmental information to anticipate an end user's immediate needs, offering more-sophisticated, situation-aware and usable functions. Smart homes connect all the devices and appliances in your home so they can communicate with each other and with you. Context-awareness can be applied to Smart Home technology. In this paper, we discuss the context-aware tools for development of Smart Home Systems.
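The context-aware behavior this abstract describes can be sketched as a minimal rule dispatcher, where each rule pairs a predicate over the current home context with an action. The context keys and action names below are illustrative assumptions, not taken from any cited system.

```python
def react(context, rules):
    """Return the actions whose predicates match the current context.

    A minimal sketch of context-aware dispatch for a smart home:
    `context` is a snapshot of sensed values; `rules` is a list of
    (predicate, action_name) pairs. All names are illustrative.
    """
    return [action for predicate, action in rules if predicate(context)]

# Hypothetical rules: dim light plus occupancy turns lights on; heat turns a fan on.
rules = [
    (lambda c: c["lux"] < 50 and c["occupied"], "lights_on"),
    (lambda c: c["temp_c"] > 26, "fan_on"),
]
```

Calling `react({"lux": 20, "occupied": True, "temp_c": 22}, rules)` yields `["lights_on"]`; richer systems replace the predicates with learned models of presence, location, and social attributes.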
Biological ageing and clinical consequences of modern technology.
Kyriazis, Marios
2017-08-01
The pace of technology is steadily increasing, and this has a widespread effect on all areas of health and society. When we interact with this technological environment we are exposed to a wide variety of new stimuli and challenges, which may modulate the stress response and thus change the way we respond and adapt. In this Opinion paper I will examine certain aspects of the human-computer interaction with regards to health and ageing. There are practical, everyday effects which also include social and cultural elements. I will discuss how human evolution may be affected by this new environmental change (the hormetic immersion in a virtual/technological environment). Finally, I will also explore certain biological aspects which have direct relevance to the ageing human. By embracing new technologies and engaging with a techno-social ecosystem (which is no longer formed by several interacting species, but by just two main elements: humans and machines), we may be subjected to beneficial hormetic effects, which upregulate the stress response and modulate adaptation. This is likely to improve overall health as we age and, as I speculate here, may also result in the reduction of age-related dysfunction.
Computer-aided navigation in dental implantology: 7 years of clinical experience.
Ewers, Rolf; Schicho, Kurt; Truppe, Michael; Seemann, Rudolf; Reichwein, Astrid; Figl, Michael; Wagner, Arne
2004-03-01
This long-term study gives a review over 7 years of research, development, and routine clinical application of computer-aided navigation technology in dental implantology. Benefits and disadvantages of up-to-date technologies are discussed. In the course of the current advancement, various hardware and software configurations are used. In the initial phase, universally applicable navigation software is adapted for implantology. Since 2001, a special software module for dental implantology has been available. Preoperative planning is performed on the basis of prosthetic aspects and requirements. In clinical routine use, patient and drill positions are intraoperatively registered by means of optoelectronic tracking systems; during preclinical tests, electromagnetic trackers are also used. In 7 years (1995 to 2002), 55 patients with 327 dental implants were successfully positioned with computer-aided navigation technology. The mean number of implants per patient was 6 (minimum, 1; maximum, 11). No complications were observed; the preoperative planning could be exactly realized. The average expenditure of time for the preparation of a surgical intervention with navigation decreased from 2 to 3 days in the initial phase to one-half day in clinical routine use with software that is optimized for dental implantology. The use of computer-aided navigation technology can contribute to considerable quality improvement. Preoperative planning is exactly realized and intraoperative safety is increased, because damage to nerves or neighboring teeth can be avoided.
Biomechanics of Early Cardiac Development
Goenezen, Sevan; Rennie, Monique Y.
2012-01-01
Biomechanics affect early cardiac development, from looping to the development of chambers and valves. Hemodynamic forces are essential for proper cardiac development, and their disruption leads to congenital heart defects. A wealth of information already exists on early cardiac adaptations to hemodynamic loading, and new technologies, including high resolution imaging modalities and computational modeling, are enabling a more thorough understanding of relationships between hemodynamics and cardiac development. Imaging and modeling approaches, used in combination with biological data on cell behavior and adaptation, are paving the road for new discoveries on links between biomechanics and biology and their effect on cardiac development and fetal programming. PMID:22760547
Strategic Adaptation of SCA for STRS
NASA Technical Reports Server (NTRS)
Quinn, Todd; Kacpura, Thomas
2007-01-01
The Space Telecommunication Radio System (STRS) architecture is being developed to provide a standard framework for future NASA space radios with greater degrees of interoperability and flexibility to meet new mission requirements. The space environment imposes unique operational requirements with restrictive size, weight, and power constraints that are significantly smaller than terrestrial-based military communication systems. With the harsh radiation environment of space, the computing and processing resources are typically one or two generations behind current terrestrial technologies. Despite these differences, there are elements of the SCA that can be adapted to facilitate the design and implementation of the STRS architecture.
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. Firstly, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measure (for a real AO image) from our algorithm is increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
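The joint deconvolution step can be illustrated with a plain Richardson-Lucy iteration, the classical baseline this abstract compares against (RL-IBD), rather than the authors' adaptive-TV variant. The FFT-based convolution below assumes periodic boundaries and a PSF centered in its array; it is a sketch, not the paper's implementation.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution via FFT; psf is centered in its array."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def fft_correlate(img, psf):
    """Circular correlation, i.e. convolution with the flipped PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(np.fft.ifftshift(psf)))))

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy deconvolution: a multiplicative update
    that keeps the estimate non-negative given a known PSF."""
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fft_convolve(estimate, psf)                 # forward model
        estimate *= fft_correlate(observed / (blurred + eps), psf)
    return estimate
```

Given a known blur kernel, a few dozen iterations typically reduce the error relative to the blurred input; TV-regularized variants like the paper's add a smoothness penalty to suppress noise amplification.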
Towards Contextualized Learning Services
NASA Astrophysics Data System (ADS)
Specht, Marcus
Personalization of feedback and instruction has often been considered a key feature in learning support. The adaptation of the instructional process to the individual and its different aspects has been investigated from different research perspectives, such as learner modelling, intelligent tutoring systems, adaptive hypermedia, adaptive instruction, and others. Already in the 1950s, the first commercial systems for adaptive instruction for training keyboard skills were developed, utilizing adaptive configuration of feedback based on user performance and interaction footprints (Pask 1964). Around adaptive instruction there is a variety of research issues bringing together interdisciplinary research from computer science, engineering, psychology, psychotherapy, cybernetics, system dynamics, instructional design, and empirical research on technology-enhanced learning. When classifying best practices of adaptive instruction, different parameters of the instructional process have been identified that are adapted to the learner, such as sequence and size of task difficulty, timing of feedback, pace of learning, reinforcement plan, and others; these are often referred to as the adaptation target. Furthermore, Aptitude Treatment Interaction studies explored the effect of adapting instructional parameters to different characteristics of the learner (Tennyson and Christensen 1988), such as task performance, personality characteristics, or cognitive abilities; this information is referred to as the adaptation means.
Faculty experiences with providing online courses. Thorns among the roses.
Cravener, P A
1999-01-01
This article presents a review of the literature summarizing faculty reports of their experiences with computer-mediated distance education compared with their traditional face-to-face teaching experiences. Both challenges and benefits of distance learning programs contrasted with classroom-based teaching are revealed. Specific difficulties and advantages identified by online faculty were categorized into four broad areas of impact on the teaching/learning experience: (a) faculty workload, (b) access to education, (c) adapting to technology, and (d) instructional quality. Challenges appear to be related predominantly to faculty workloads, new technologies, and online course management. Benefits identified by online educators indicate that computer-mediated distance education has high potential for expanding student access to educational resources, for providing individualized instruction, and for promoting active learning among geographically separated members of learning groups.
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
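The spectral-phase idea, filtering white Gaussian noise by the square root of a prescribed power spectrum, can be sketched for a single static Kolmogorov phase screen. The paper's method generates time-variant fields and runs in parallel; the version below is a serial, schematic sketch, and its overall amplitude normalization is indicative only.

```python
import numpy as np

def kolmogorov_screen(n, r0, dx, seed=0):
    """Generate one n-by-n random phase screen by spectral filtering:
    complex white noise is shaped by sqrt(PSD) and inverse-FFT'd.
    r0 is the Fried parameter and dx the grid spacing, both in metres.
    The amplitude normalization here is schematic."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequency grid [1/m]
    FX, FY = np.meshgrid(fx, fx)
    f = np.hypot(FX, FY)
    f[0, 0] = np.inf                             # suppress the undefined DC term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov phase PSD
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    df = 1.0 / (n * dx)                          # frequency-bin width
    return np.real(np.fft.ifft2(noise * np.sqrt(psd) * df)) * n * n
```

Because each frequency bin is filtered independently, the same structure parallelizes naturally across bins or across a batch of screens, which is what makes GPU implementations of this method fast.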
Tutoring the Elderly on the Use of Recommending Systems
ERIC Educational Resources Information Center
Savvopoulos, Anastasios; Virvou, Maria
2010-01-01
Purpose: The elderly are often unfamiliar with computer technology and can encounter great difficulties. Moreover, the terms used in such systems may prove to be a challenge for these users. The aim of this research is to tutor the elderly on using an adaptive e-shop system in order to buy products easily. Design/methodology/approach: In view of…
ERIC Educational Resources Information Center
Kline, Terence R.; Kneen, Harold; Barrett, Eric; Kleinschmidt, Andy; Doohan, Doug
2012-01-01
Differences in vegetable production methods utilized by American growers create distinct challenges for Extension personnel providing food safety training to producer groups. A program employing computers and projectors will not be accepted by an Amish group that does not accept modern technology. We have developed an outreach program that covers…
ERIC Educational Resources Information Center
Möller, Karla J.
2015-01-01
For over two decades, the field of children's literature has been incorporating more digital technologies into publication of and access to texts. From early computer and CD-ROM adaptations of print picturebooks to the extensive visual and aural interactivity of the newest literature apps, what and how children read has changed significantly in…
Fager, Susan Koch; Burnfield, Judith M
2014-03-01
To understand individuals' perceptions of technology use during inpatient rehabilitation. A qualitative phenomenological study using semi-structured interviews of 10 individuals with diverse underlying diagnoses and/or a close family member who participated in inpatient rehabilitation. Core themes focused on assistive technology usage (equipment set-up, reliability and fragility of equipment, expertise required to use assistive technology and use of mainstream technologies) and opportunities for using technology to increase therapeutic engagement (opportunities for practice outside of therapy, goals for therapeutic exercises and technology for therapeutic exercises: motivation and social interaction). Interviews revealed the need for durable, reliable and intuitive technology without requiring a high level of expertise to install and implement. A strong desire for the continued use of mainstream devices (e.g. cell phones, tablet computers) reinforces the need for a wider range of access options for those with limited physical function. Finally, opportunities to engage in therapeutically meaningful activities beyond the traditional treatment hours were identified as valuable for patients to not only improve function but to also promote social interaction. Assistive technology increases functional independence of severely disabled individuals. End-users (patients and families) identified a need for designs that are durable, reliable, intuitive, easy to consistently install and use. Technology use (adaptive or commercially available) provides a mechanism to extend therapeutic practice beyond the traditional therapy day. Adapting skeletal tracking technology used in gaming software could automate exercise tracking, documentation and feedback for patient motivation and clinical treatment planning and interventions.
Computing in Hydraulic Engineering Education
NASA Astrophysics Data System (ADS)
Duan, J. G.
2011-12-01
Civil engineers, pioneers of our civilization, are rarely perceived as leaders and innovators in modern society because the profession has lagged in technology innovation. This crisis has resulted in the decline of the prestige of the civil engineering profession, reduction of federal funding for deteriorating infrastructure, and problems with attracting the most talented high-school students. Infusing cutting-edge computer technology and stimulating creativity and innovation are therefore the critical challenges for civil engineering education. To better prepare our graduates to innovate, this paper discusses the adoption of problem-based collaborative learning techniques and the integration of civil engineering computing into a traditional civil engineering curriculum. Three interconnected courses, Open Channel Flow, Computational Hydraulics, and Sedimentation Engineering, were developed with emphasis on computational simulations. In Open Channel Flow, the focus is on principles of free-surface flow and the application of computational models. This prepares students for the second course, Computational Hydraulics, which introduces the fundamental principles of computational hydraulics, including finite difference and finite element methods. This course complements the Open Channel Flow class to provide students with an in-depth understanding of computational methods. The third course, Sedimentation Engineering, covers the fundamentals of sediment transport and river engineering, so students can apply the knowledge and programming skills gained from previous courses to develop computational models for simulating sediment transport. These courses effectively equipped students with important skills and knowledge to complete thesis and dissertation research.
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
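The analytical performance model described above can be sketched as a simple timing decomposition in which compute-bound work divides across GPUs while memory-bound work and per-GPU communication overhead do not. The parameters below are illustrative placeholders, not the authors' calibrated model.

```python
def predicted_time(t_compute, t_memory, n_gpus, t_comm_per_gpu=0.0):
    """Toy model: total time = divisible compute work / n_gpus
    + fixed memory-bound work + interconnect overhead that grows
    with the number of participating GPUs. All values in seconds."""
    return t_compute / n_gpus + t_memory + t_comm_per_gpu * n_gpus

def speedup(t_compute, t_memory, n_gpus, t_comm_per_gpu=0.0):
    """Acceleration relative to a single GPU under the same model."""
    single = predicted_time(t_compute, t_memory, 1, t_comm_per_gpu)
    return single / predicted_time(t_compute, t_memory, n_gpus, t_comm_per_gpu)
```

When the memory and communication terms are negligible, the speedup approaches `n_gpus`, matching the abstract's observation that acceleration is proportional to GPU count when computation far outweighs memory operations.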
Schueller, Stephen Matthew
2017-01-01
Background: Positive psychological interventions for children have typically focused on direct adaptations of interventions developed for adults. As the community moves toward designing positive computing technologies to support child well-being, it is important to use a more participatory process that directly engages children’s voices. Objective: Our objectives were, through a participatory design study, to understand children’s interpretations of positive psychology concepts, as well as their perspectives on technologies that are best suited to enhance their engagement with practice of well-being skills. Methods: We addressed these questions through a content analysis of 434 design ideas, 51 sketches, and 8 prototypes and videos, which emerged from a 14-session cooperative inquiry study with 12 child “happiness inventors.” The study was part of a summer learning camp held at the children’s middle school, which focused on teaching the invention process, teaching well-being skills drawn from positive psychology and related areas (gratitude, mindfulness, and problem solving), and iterating design ideas for technologies to support these skills. Results: The children’s ideas and prototypes revealed specific facets of how they interpreted gratitude (as thanking, being positive, and doing good things), mindfulness (as externally representing thoughts and emotions, controlling those thoughts and emotions, getting through unpleasant things, and avoiding forgetting something), and problem solving (as preventing bad decisions, seeking alternative solutions, and not dwelling on unproductive thoughts). This process also revealed that children emphasized particular technologies in their solutions. While desktop or laptop solutions were notably lacking, other ideas were roughly evenly distributed between mobile apps and embodied computing technologies (toys, wearables, etc). We also report on desired functionalities and approaches to engagement in the children’s ideas, such as a notable emphasis on representing and responding to internal states. Conclusions: Our findings point to promising directions for the design of positive computing technologies targeted at children, with particular emphases on the perspectives, technologies, engagement approaches, and functionalities that appealed to the children in our study. The dual focus of the study on teaching skills while designing technologies is a novel methodology in the design of positive computing technologies intended to increase child well-being. PMID:28096066
Cyberpsychology: a human-interaction perspective based on cognitive modeling.
Emond, Bruno; West, Robert L
2003-10-01
This paper argues for the relevance of cognitive modeling and cognitive architectures to cyberpsychology. From a human-computer interaction point of view, cognitive modeling can have benefits both for theory and model building, and for the design and evaluation of sociotechnical systems usability. Cognitive modeling research applied to human-computer interaction has two complementary objectives: (1) to develop theories and computational models of human interactive behavior with information and collaborative technologies, and (2) to use the computational models as building blocks for the design, implementation, and evaluation of interactive technologies. From the perspective of building theories and models, cognitive modeling offers the possibility to anchor cyberpsychology theories and models in cognitive architectures. From the perspective of the design and evaluation of sociotechnical systems, cognitive models can provide the basis for simulated users, which can play an important role in usability testing. As an example of the application of cognitive modeling to technology design, the paper presents a simulation of interactive behavior with five different adaptive menu algorithms: random, fixed, stacked, frequency based, and activation based. Results of the simulation indicate that fixed menu positions seem to offer the best support for classification-like tasks such as filing e-mails. This research is part of the Human-Computer Interaction and Broadband Visual Communication research programs at the National Research Council of Canada, in collaboration with the Carleton Cognitive Modeling Lab at Carleton University.
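The menu-adaptation policies compared in that simulation can be mimicked with a toy cost model that tracks the 1-based position at which each clicked item is found. This is an illustrative re-creation of two of the five policies only, not the paper's cognitive-architecture simulation; in particular, it ignores the user relearning cost that made fixed positions win in the paper's results.

```python
from collections import Counter

def mean_selection_position(items, clicks, policy="fixed"):
    """Average menu position of the clicked item under two policies:
    'fixed' keeps the original order; 'frequency' sorts items by past
    click counts (a frequency-based adaptive menu). Illustrative only."""
    counts = Counter()
    total = 0
    for target in clicks:
        if policy == "frequency":
            # Stable sort: ties keep the original (fixed) ordering.
            order = sorted(items, key=lambda it: -counts[it])
        else:
            order = list(items)
        total += order.index(target) + 1
        counts[target] += 1
    return total / len(clicks)
```

With six items and repeated clicks on the last one, the fixed policy pays position 6 every time while the frequency policy pays 6 once and 1 thereafter; what the toy omits is the visual-search penalty a user incurs whenever the layout changes under them.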
Audio-Enhanced Tablet Computers to Assess Children’s Food Frequency From Migrant Farmworker Mothers
Kilanowski, Jill F.; Trapl, Erika S.; Kofron, Ryan M.
2014-01-01
This study sought to improve data collection in children’s food frequency surveys for non-English speaking immigrant/migrant farmworker mothers using audio-enhanced tablet computers (ATCs). We hypothesized that by using technological adaptations, we would be able to improve data capture and therefore reduce lost surveys. This Food Frequency Questionnaire (FFQ), a paper-based dietary assessment tool, was adapted for ATCs and assessed consumption of 66 food items, asking 3 questions for each food item: frequency, quantity of consumption, and serving size. The tablet-based survey was audio enhanced, with each question “read” to participants, accompanied by food item images, together with an embedded short instructional video. Results indicated that respondents were able to complete the 198 questions from the 66-food-item FFQ on ATCs in approximately 23 minutes. Compared with paper-based FFQs, ATC-based FFQs had less missing data. Despite overall reductions in missing data by use of ATCs, respondents still appeared to have difficulty with question 2 of the FFQ. The ability to score the FFQ depended on the sections in which the missing data were located. Unlike the paper-based FFQs, no ATC-based FFQs were unscored due to the amount or location of missing data. An ATC-based FFQ was feasible and increased the ability to score this survey on children’s food patterns from migrant farmworker mothers. This adapted technology may serve as an exemplar for other non-English speaking immigrant populations. PMID:25343004
NASA Technical Reports Server (NTRS)
Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John
1994-01-01
This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. NASA LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.
Enhancing Care of Aged and Dying Prisoners: Is e-Learning a Feasible Approach?
Loeb, Susan J; Penrod, Janice; Myers, Valerie H; Baney, Brenda L; Strickfaden, Sophia M; Kitt-Lewis, Erin; Wion, Rachel K
Prisons and jails are facing sharply increased demands in caring for aged and dying inmates. Our Toolkit for Enhancing End-of-life Care in Prisons effectively addressed end-of-life (EOL) care; however, geriatric content was limited, and the product was not formatted for broad dissemination. Prior research adapted best practices in EOL care and aging, but delivery methods lacked emerging technology-focused learning and interactivity. Our purposes were to uncover current training approaches and preferences and to ascertain the technological capacity of correctional settings to deliver computer-based and other e-learning training. An environmental scan was conducted with 11 participants from U.S. prisons and jails to ensure proper fit, in terms of content and technology capacity, between an envisioned computer-based training product and correctional settings. Environmental scan findings focused on content of training, desirable qualities of training, prominence of "homegrown" products, and feasibility of commercial e-learning. This study identified qualities of training programs to adopt and pitfalls to avoid and revealed technology-related issues to be mindful of when designing computer-based training for correctional settings, and participants spontaneously expressed an interest in geriatrics and EOL training using this learning modality as long as training allowed for tailoring of materials.
Better informed in clinical practice - a brief overview of dental informatics.
Reynolds, P A; Harper, J; Dunne, S
2008-03-22
Uptake of dental informatics has been hampered by technical and user issues. Innovative systems have been developed, but usability issues have affected many. Advances in technology and artificial intelligence are now producing clinically useful systems, although issues still remain with adapting computer interfaces to the dental practice working environment. A dental electronic health record has become a priority in many countries, including the UK. However, experience shows that any dental electronic health record (EHR) system cannot be subordinate to, or a subset of, a medical record. Such a future dental EHR is likely to incorporate integrated care pathways. Future best dental practice will increasingly depend on computer-based support tools, although disagreement remains about the effectiveness of current support tools. Over the longer term, future dental informatics tools will incorporate dynamic, online evidence-based medicine (EBM) tools, and promise more adaptive, patient-focused and efficient dental care with educational advantages in training.
Varela, Gervasio; Paz-Lopez, Alejandro; Becerra, Jose A; Duro, Richard
2016-07-07
This paper addresses the problem of developing user interfaces for Ubiquitous Computing (UC) and Ambient Intelligence (AmI) systems. These kinds of systems are expected to provide a natural user experience, considering interaction modalities adapted to the user's abilities and preferences and using whatever interaction devices are present in the environment. These interaction devices are not necessarily known at design time. The task is quite complicated due to the variety of devices and technologies and the diversity of scenarios, and it usually burdens the developer with the need to create many different UIs in order to cover the foreseeable user-environment combinations. Here, we propose a UI abstraction framework for UC and AmI systems that effectively improves the portability of those systems between different environments and for different users. It allows developers to design and implement a single UI capable of being deployed with different devices and modalities regardless of the physical location.
Agent-based user-adaptive service provision in ubiquitous systems
NASA Astrophysics Data System (ADS)
Saddiki, H.; Harroud, H.; Karmouch, A.
2012-11-01
With the increasing availability of smartphones, tablets and other computing devices, technology consumers have grown accustomed to performing all of their computing tasks anytime, anywhere and on any device. There is a growing need to support ubiquitous connectivity and accommodate users by providing software as network-accessible services. In this paper, we propose a multi-agent system (MAS) based approach to adaptive service composition and provision that automates the selection and execution of a suitable composition plan for a given service. With agents capable of autonomous and intelligent behavior, the composition plan is selected in a dynamic negotiation driven by a utility-based decision-making mechanism, and the composite service is built by a coalition of agents, each providing a component necessary to the target service. The same service can be built in variations that cater to dynamic user contexts and further personalize the user experience. Multiple services can also be grouped to satisfy new user needs.
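The utility-based selection step can be sketched as below. The plan attributes, weights, and scoring formula are invented for illustration; the paper's negotiation protocol among agents is richer than a single argmax:

```python
# Toy utility-based choice among candidate composition plans.
# Higher quality is rewarded; cost and latency are penalized.

def utility(plan, weights):
    return (weights["quality"] * plan["quality"]
            - weights["cost"] * plan["cost"]
            - weights["latency"] * plan["latency"])

def select_plan(plans, weights):
    # Pick the plan with the highest utility for this user context.
    return max(plans, key=lambda p: utility(p, weights))

plans = [
    {"name": "local-only",  "quality": 0.6, "cost": 1.0, "latency": 0.1},
    {"name": "cloud-heavy", "quality": 0.9, "cost": 3.0, "latency": 0.4},
]
# A user context that prizes quality over cost picks the richer plan:
weights = {"quality": 10.0, "cost": 1.0, "latency": 1.0}
best = select_plan(plans, weights)
print(best["name"])
```

Changing the weights models a different user context selecting a different variation of the same service.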
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas
The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations had not been possible before because the time-to-solution fell short, by a factor of more than 10, of completing one physics case in under 5 days of wall-clock time. Frontier techniques, such as nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, dynamic repartitioning for balancing computational work in pushing particles and in grid-related work, scalable and accurate discretization algorithms for non-linear Coulomb collisions, and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs, are also utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.
Supporting medical communication for older patients with a shared touch-screen computer.
Piper, Anne Marie; Hollan, James D
2013-11-01
Increasingly, health care facilities are adopting electronic medical record systems and installing computer workstations in patient exam rooms. The introduction of computer workstations into the medical interview process makes it important to consider the impact of such technology on older patients as well as new types of interfaces that may better suit the needs of older adults. While many older adults are comfortable with a traditional computer workstation with a keyboard and mouse, this article explores how a large horizontal touch-screen (i.e., a surface computer) may suit the needs of older patients and facilitate the doctor-patient interview process. Twenty older adults (age 60 to 88) used a prototype multiuser, multitouch system in our research laboratory to examine seven health care scenarios. Behavioral observations as well as results from questionnaires and a structured interview were analyzed. The older adults quickly adapted to the prototype system and reported that it was easy to use. Participants also suggested that having a shared view of one's medical records, especially charts and images, would enhance communication with their doctor and aid understanding. While this study is exploratory and some areas of interaction with a surface computer need to be refined, the technology is promising for sharing electronic patient information during medical interviews involving older adults. Future work must examine doctors' and nurses' interaction with the technology as well as logistical issues of installing such a system in a real-world medical setting. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Computer-aided position planning of miniplates to treat facial bone defects
Egger, Jan; Wallner, Jürgen; Gall, Markus; Chen, Xiaojun; Schwenzer-Zimmerer, Katja; Reinbacher, Knut; Schmalstieg, Dieter
2017-01-01
In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The intra-operatively used bone plates have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operative material readjustments, especially in complex surgical cases. Our approach is able to fit a selection of common implant models at the surgeon's desired position in a 3D computer model, with respect to the surrounding anatomical structures and with the possibility of adjusting both the direction and the position of the osteosynthesis material. Using the proposed software, surgeons can pre-plan the form and morphology of the resulting implant with the aid of a computer-visualized model within a few minutes. The resulting model can be stored in STL file format, the format commonly used for 3D printing. Surgeons can then print the virtually generated implant or create an individually designed bending tool. This method yields osteosynthesis materials adapted to the surrounding anatomy and requires minimal additional cost and time. PMID:28817607
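As noted above, the planned implant can be exported as STL for 3D printing. A minimal ASCII STL writer conveys the format (illustrative only; planning software would export the full triangulated implant surface, typically as binary STL):

```python
def write_ascii_stl(path, name, triangles):
    """triangles: iterable of (normal, (v1, v2, v3)), each a 3-tuple of floats."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, verts in triangles:
            f.write("  facet normal {} {} {}\n".format(*normal))
            f.write("    outer loop\n")
            for v in verts:
                f.write("      vertex {} {} {}\n".format(*v))
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangle in the z = 0 plane, written to a demo file:
tri = [((0.0, 0.0, 1.0),
        ((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)))]
write_ascii_stl("implant_demo.stl", "implant", tri)
```

Each `facet` carries its outward normal and exactly three vertices, which is why STL files are straightforward for slicers and 3D printers to consume.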
Adaptive Management of Computing and Network Resources for Spacecraft Systems
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)
2000-01-01
It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.
Power System Information Delivering System Based on Distributed Object
NASA Astrophysics Data System (ADS)
Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji
In recent years, there have been remarkable advances in computer performance, computer networking, and distributed information processing technology. Moreover, deregulation is starting and will spread in the electric power industry in Japan. Consequently, power suppliers are required to supply low-cost power with high-quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability needed to adapt to these movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture, and describe the architecture and the evaluation results of this prototype. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using the Internet and distributed object technology. This system is a new type of SCADA system which monitors failures of the power transmission and distribution systems in a geographic-information-integrated way.
Cloud based intelligent system for delivering health care as a service.
Kaur, Pankaj Deep; Chana, Inderveer
2014-01-01
The promising potential of cloud computing and its convergence with technologies such as mobile computing, wireless networks, and sensor technologies allows for the creation and delivery of newer types of cloud services. In this paper, we advocate the use of cloud computing for the creation and management of cloud-based health care services. As a representative case study, we design a Cloud Based Intelligent Health Care Service (CBIHCS) that performs real-time monitoring of user health data for diagnosis of chronic illness such as diabetes. Advanced body sensor components are utilized to gather user-specific health data and store it in cloud-based storage repositories for subsequent analysis and classification. In addition, infrastructure-level mechanisms are proposed to provide dynamic resource elasticity for CBIHCS. Experimental results demonstrate that classification accuracy of 92.59% is achieved with our prototype system and that the predicted patterns of CPU usage offer better opportunities for adaptive resource elasticity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Technology transfer for adaptation
NASA Astrophysics Data System (ADS)
Biagini, Bonizella; Kuhl, Laura; Gallagher, Kelly Sims; Ortiz, Claudia
2014-09-01
Technology alone will not be able to solve adaptation challenges, but it is likely to play an important role. As a result of the role of technology in adaptation and the importance of international collaboration for climate change, technology transfer for adaptation is a critical but understudied issue. Through an analysis of Global Environment Facility-managed adaptation projects, we find there is significantly more technology transfer occurring in adaptation projects than might be expected given the pessimistic rhetoric surrounding technology transfer for adaptation. Most projects focused on demonstration and early deployment/niche formation for existing technologies rather than earlier stages of innovation, which is understandable considering the pilot nature of the projects. Key challenges for the transfer process, including technology selection and appropriateness under climate change, markets and access to technology, and diffusion strategies are discussed in more detail.
Programming model for distributed intelligent systems
NASA Technical Reports Server (NTRS)
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
1975-12-01
ceases to act as a testing ground for determining user needs when demand is low relative to high initial development costs. Customers are forced to...ANASTASIO Associate Director Data Analysis Research Division Educational Testing Service Princeton, NJ 08540 (609)921-9000 Director of Educational...technology and educational psychology - Indiana University. Research interests in adaptive, interactive instructional systems. Management
Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations
NASA Technical Reports Server (NTRS)
Patrick, M. Clinton; Thaler, Stephen L.; Stevenson-Chavis, Katherine
2006-01-01
Neural Networks have been under examination for decades in many areas of research, with varying degrees of success and acceptance. Key goals of computer learning, rapid problem solution, and automatic adaptation have been elusive at best. This paper summarizes efforts at NASA's Marshall Space Flight Center harnessing such technology to autonomous space vehicle docking for the purpose of evaluating applicability to future missions.
Supporting 21st-Century Teaching and Learning: The Role of Google Apps for Education (GAFE)
ERIC Educational Resources Information Center
Awuah, Lawrence J.
2015-01-01
The future of higher education is likely to be driven by the willingness to adapt and grow with the use of technologies in teaching, learning, and research. Google Apps for Education (GAFE) is a powerful cloud-computing solution that works for students regardless of their location, time, or the type of device being used. GAFE is used by…
ERIC Educational Resources Information Center
Minkara, Mona S.; Weaver, Michael N.; Gorske, Jim; Bowers, Clifford R.; Merz, Kenneth M., Jr.
2015-01-01
Blind and low-vision students are sparsely represented in science, technology, engineering and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees, as well as to a lack of appropriate adaptive resources in upper-level STEM courses and research. Mona Minkara is a rising fifth…
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping
2014-01-01
EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804
EngineSim: Turbojet Engine Simulator Adapted for High School Classroom Use
NASA Technical Reports Server (NTRS)
Petersen, Ruth A.
2001-01-01
EngineSim is an interactive educational computer program that allows users to explore the effect of engine operation on total aircraft performance. The software is supported by a basic propulsion web site called the Beginner's Guide to Propulsion, which includes educator-created, web-based activities for the classroom use of EngineSim. In addition, educators can schedule videoconferencing workshops in which EngineSim's creator demonstrates the software and discusses its use in the educational setting. This software is a product of NASA Glenn Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program.
Computational Fluid Dynamics Technology for Hypersonic Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2003-01-01
Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state-of-the-art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.
Chen, Ming-Hung
2015-01-01
This paper proposes a new adaptive filter for wind generators that combines instantaneous reactive power compensation technology with a current prediction controller; the resulting system is characterized by low harmonic distortion, high power factor, and small DC-link voltage variations during load disturbances. The performance of the system was first simulated using MATLAB/Simulink, and the ability of an adaptive digital low-pass filter to eliminate current harmonics was confirmed in steady and transient states. Subsequently, a digital signal processor was used to implement an active power filter. The experimental results indicate that, for the rated operation of 2 kVA, the system has a total harmonic distortion of current less than 5.0% and a power factor of 1.0 on the utility side. Thus, the transient performance of the adaptive filter is superior to that of the traditional digital low-pass filter, and it is more economical because of its short computation time compared with other types of adaptive filters. PMID:26451391
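The role the digital low-pass filter plays in instantaneous-power-based compensation can be sketched as follows: the filter keeps the DC part of the instantaneous power (the fundamental's contribution), and the oscillating remainder identifies the harmonic current to cancel. The sampling rate, cutoff, and test signal below are invented for illustration; unlike the paper's adaptive filter, this fixed-coefficient sketch does not tune its own bandwidth:

```python
import math

def lowpass(samples, alpha):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

fs = 5000.0                       # sampling rate, Hz (invented)
t = [n / fs for n in range(5000)]
# DC component (fundamental power) plus a 300 Hz oscillation (harmonic power):
p = [1.0 + 0.5 * math.sin(2 * math.pi * 300 * ti) for ti in t]

p_dc = lowpass(p, alpha=0.01)
# After settling, the filtered signal tracks the 1.0 DC level closely,
# while the raw signal still swings by roughly +/- 0.5:
ripple = max(abs(v - 1.0) for v in p_dc[2500:])
print(ripple)
```

Subtracting the filtered DC power from the raw instantaneous power leaves the oscillating term from which the compensating current reference is derived.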
Use of a wireless local area network in an orthodontic clinic.
Mupparapu, Muralidhar; Binder, Robert E; Cummins, John M
2005-06-01
Radiographic images and other patient records, including medical histories, demographics, and health insurance information, can now be stored digitally and accessed via patient management programs. However, digital image acquisition and diagnosis and treatment planning are independent tasks, and each is time consuming, especially when performed at different computer workstations. Networking or linking the computers in an office enhances access to imaging and treatment planning tools. Access can be further enhanced if the entire network is wireless. Thanks to wireless technology, stand-alone, desk-bound personal computers have been replaced with mobile, hand-held devices that can communicate with each other and the rest of the world via the Internet. As with any emerging technology, some issues should be kept in mind when adapting to the wireless environment. Foremost is network security. Second is the choice of mobile hardware devices that are used by the orthodontist, office staff, and patients. This article details the standards and choices in wireless technology that can be implemented in an orthodontic clinic and suggests how to select suitable mobile hardware for accessing or adding data to a preexisting network. The network security protocols discussed comply with HIPAA regulations and boost the efficiency of a modern orthodontic clinic.
Further evaluation of the constrained least squares electromagnetic compensation method
NASA Technical Reports Server (NTRS)
Smith, William T.
1991-01-01
Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.
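At its core, feed-array compensation is a least-squares fit: choose element weights w minimizing ||Aw - d||², where column j of A samples element j's far-field contribution at a set of directions and d is the desired (undistorted) pattern. Below is a toy real-valued, unconstrained version via the normal equations (Aᵀ A) w = Aᵀ d for two elements; the actual method is complex-valued and constrained, and the data here are invented:

```python
def lstsq_2(A, d):
    """Solve the 2-unknown least-squares problem via its normal equations.
    A: list of rows [a1, a2]; d: list of desired values, one per row."""
    a11 = sum(r[0] * r[0] for r in A)
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * di for r, di in zip(A, d))
    b2 = sum(r[1] * di for r, di in zip(A, d))
    det = a11 * a22 - a12 * a12  # assumed nonzero (independent columns)
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two element patterns sampled at 3 far-field directions, and a desired pattern:
A = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
d = [1.0, 2.0, 1.0]
w = lstsq_2(A, d)
print(w)
```

Here d happens to lie in the column space of A, so the fit is exact; with measured surface-distortion data the residual quantifies how much of the degradation the feed array can recover.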
Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays
Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon
2017-01-01
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871
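The vergence arithmetic behind correcting a refractive error with a focus-tunable lens can be sketched with a simplified model (an assumption for illustration, not the authors' implementation): to show a virtual object at distance d meters to a user with spherical prescription R diopters, present vergence -1/d + R at the eye.

```python
def lens_vergence(sim_distance_m: float, prescription_d: float = 0.0) -> float:
    """Vergence (diopters) the display optics should present so a user with
    the given spherical prescription sees the virtual object sharply at the
    simulated distance (simplified thin-lens/vergence model)."""
    return -1.0 / sim_distance_m + prescription_d

# An emmetrope viewing a virtual object at 2 m, then a -2 D myope:
print(lens_vergence(2.0))        # divergent light, as from a real 2 m object
print(lens_vergence(2.0, -2.0))  # extra negative power stands in for glasses
```

Driving this offset from the gaze-tracked fixation distance is what lets the tunable lens provide both natural focus cues and the user's refractive correction at once.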
Adaptive voting computer system
NASA Technical Reports Server (NTRS)
Koczela, L. J.; Wilgus, D. S. (Inventor)
1974-01-01
A computer system is reported that uses adaptive voting to tolerate failures and operates in a fail-operational, fail-safe manner. Each of four computers is individually connected to one of four external input/output (I/O) busses which interface with external subsystems. Each computer is connected to receive input data and commands from the other three computers and to furnish output data commands to the other three computers. An adaptive control apparatus including a voter-comparator-switch (VCS) is provided for each computer to receive signals from each of the computers and permits adaptive voting among the computers to permit the fail-operational, fail-safe operation.
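The fail-operational, fail-safe behavior can be illustrated with a toy software voter: each cycle the channels' outputs are majority-voted, and a disagreeing channel is excluded from subsequent votes. This is a sketch of the voting concept only, not the patented voter-comparator-switch hardware (a real voter would require a persistence threshold before excluding a channel):

```python
def majority(values):
    # Most frequent value among the votes cast this cycle.
    return max(set(values), key=values.count)

class AdaptiveVoter:
    def __init__(self, n_channels):
        self.active = set(range(n_channels))

    def vote(self, outputs):
        """outputs: dict mapping active channel -> its output this cycle."""
        votes = [outputs[c] for c in self.active]
        result = majority(votes)
        # Adapt: drop channels that disagreed with the voted result,
        # so the system degrades 4-way -> 3-way -> ... gracefully.
        self.active = {c for c in self.active if outputs[c] == result}
        return result

v = AdaptiveVoter(4)
print(v.vote({0: 7, 1: 7, 2: 9, 3: 7}))  # channel 2 disagrees this cycle
print(sorted(v.active))                  # channel 2 no longer participates
```

With four channels the system stays operational after one failure and can still detect (fail safe on) a further disagreement among the remaining three.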
Framework for a clinical information system.
Van De Velde, R; Lansiers, R; Antonissen, G
2002-01-01
The design and implementation of a Clinical Information System architecture is presented. This architecture has been developed and implemented from components following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the middle tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and on reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.
Envisioning future cognitive telerehabilitation technologies: a co-design process with clinicians.
How, Tuck-Voon; Hwang, Amy S; Green, Robin E A; Mihailidis, Alex
2017-04-01
Purpose: Cognitive telerehabilitation is the concept of delivering cognitive assessment, feedback, or therapeutic intervention at a distance through technology. With the increase of mobile devices, wearable sensors, and novel human-computer interfaces, new possibilities are emerging to expand the cognitive telerehabilitation paradigm. This research aims to: (1) explore design opportunities and considerations when applying emergent pervasive computing technologies to cognitive telerehabilitation and (2) develop a generative co-design process for use with rehabilitation clinicians. Methods: We conducted a custom co-design process that used design cards, probes, and design sessions with traumatic brain injury (TBI) clinicians. All field notes and transcripts were analyzed qualitatively. Results: Potential opportunities for TBI cognitive telerehabilitation exist in the areas of communication competency, executive functioning, emotional regulation, energy management, assessment, and skill training. Designers of TBI cognitive telerehabilitation technologies should consider how technologies are adapted to a patient's physical/cognitive/emotional state, their changing rehabilitation trajectory, and their surrounding life context (e.g. social considerations). Clinicians were receptive to our co-design approach. Conclusion: Pervasive computing offers new opportunities for life-situated cognitive telerehabilitation. Convivial design methods, such as this co-design process, are a helpful way to explore new design opportunities and an important space for further methodological development. Implications for Rehabilitation: Designers of rehabilitation technologies should consider how to extend current design methods in order to facilitate the creative contribution of rehabilitation stakeholders. This co-design approach enables fuller participation from rehabilitation clinicians at the front-end of design.
Pervasive computing has the potential to: extend the duration and intensity of cognitive telerehabilitation training (including the delivery of 'booster' sessions or maintenance therapies); provide assessment and treatment in the context of a traumatic brain injury (TBI) patient's everyday life (thereby enhancing generalization); and permit time-sensitive interventions. Long-term use of pervasive computing for TBI cognitive telerehabilitation should take into account a patient's changing recovery trajectory, their meaningful goals, and their journey from loss to redefinition.
A new application for food customization with additive manufacturing technologies
NASA Astrophysics Data System (ADS)
Serenó, L.; Vallicrosa, G.; Delgado, J.; Ciurana, J.
2012-04-01
Additive Manufacturing (AM) technologies have emerged as a freeform approach capable of producing almost any complete three-dimensional (3D) object from computer-aided design (CAD) data by successively adding material layer by layer. Despite the broad range of possibilities, commercial AM technologies remain complex and expensive, making them suitable only for niche applications. The development of the Fab@Home system as an open AM technology opened a new range of possibilities for processing different materials, such as edible products. The main objective of this work is to analyze and optimize the manufacturing capacity of this system when producing 3D edible objects. A new heated syringe deposition tool was developed and several process parameters were optimized to adapt this technology to consumers' needs. The results of this study show the potential of this system to produce customized edible objects without requiring specialized operator knowledge, thereby saving manufacturing costs compared with traditional technologies.
ERIC Educational Resources Information Center
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng
2016-01-01
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2
NASA Technical Reports Server (NTRS)
Lea, Robert N. (Editor); Villarreal, James A. (Editor)
1991-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.
The Military Technology of the Polish People’s Army, 30 Years of Development,
1983-12-20
armed with light self-propelled cannon mounted on tracked vehicles (adapted also to transport aircraft) as well as recoilless guns ... with radar-computers, mounted on self-propelled tracked vehicles. The modernization of artillery has led to the expansion of technical-repair ... light tracked vehicles and on self-propelled aerial vehicles. A very important factor influencing the improvement of the effectiveness of artillery
NASA Technical Reports Server (NTRS)
Matijevic, Jacob R.; Zimmerman, Wayne F.; Dolinsky, Shlomo
1990-01-01
Assembly of electromechanical and electronic equipment (including computers) constitutes a test bed for development of advanced robotic systems for remote manipulation. It combines features not found in commercial systems, and its architecture allows easy growth in complexity and level of automation. The system is a national resource for validation of new telerobotic technology. Intended primarily for robots used in outer space, the test bed has been adapted to development of advanced terrestrial telerobotic systems for handling radioactive materials, dangerous chemicals, and explosives.
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Shih, Ching-Tien; Wu, Hsiao-Ling
2010-01-01
The latest research adopted software technology to redesign the mouse driver, turning a mouse into a useful pointing assistive device for people with multiple disabilities who cannot easily, or cannot at all, use a standard mouse, improving their pointing performance through a new operation method, the Extended Dynamic Pointing Assistive Program (EDPAP),…
Application of Optical Disc Databases and Related Technology to Public Access Settings
1992-03-01
users to download and retain data. A Video Graphics Adapter (VGA) monitor was included. No printer was provided. 2. CD-ROM Product: Computer Select, a ... download facilities, without printer support, satisfy user needs? A secondary, but significant, objective was avoidance of unnecessary Reader ... design of User Log sheets and militated against attachment of a printer to the workstation. F. DATA COLLECTION: This section describes the methodology
Weaves as an Interconnection Fabric for ASIM's and Nanosatellites
NASA Technical Reports Server (NTRS)
Gorlick, Michael M.
1995-01-01
Many of the micromachines under consideration require computer support, indeed, one of the appeals of this technology is the ability to intermix mechanical, optical, analog, and digital devices on the same substrate. The amount of computer power is rarely an issue, the sticking point is the complexity of the software required to make effective use of these devices. Micromachines are the nano-technologist's equivalent of 'golden screws'. In other words, they will be piece parts in larger assemblages. For example, a nano-satellite may be composed of stacked silicon wafers where each wafer contains hundreds to thousands of micromachines, digital controllers, general purpose computers, memories, and high-speed bus interconnects. Comparatively few of these devices will be custom designed, most will be stock parts selected from libraries and catalogs. The novelty will lie in the interconnections. For example, a digital accelerometer may be a component part in an adaptive suspension, a monitoring element embedded in the wrapper of a package, or a portion of the smart skin of a launch vehicle. In each case, this device must inter-operate with other devices and probes for the purposes of command, control, and communication. We propose a software technology called 'weaves' that will permit large collections of micromachines and their attendant computers to freely intercommunicate while preserving modularity, transparency, and flexibility. Weaves are composed of networks of communicating software components. The network, and the components comprising it, may be changed even while the software, and the devices it controls, are executing. This unusual degree of software plasticity permits micromachines to dynamically adapt the software to changing conditions and allows system engineers to rapidly and inexpensively develop special purpose software by assembling stock software components in custom configurations.
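The central claim of weaves — that the network of communicating software components, and the components themselves, can be changed while everything keeps executing — can be illustrated with a minimal sketch. The class names and message model below are invented for the example and are not taken from the actual weaves implementation; this is only the rewiring idea under the assumption of simple one-way message passing.

```python
# Components exchange messages through a connection graph that can be
# rewired while the components keep running.
class Component:
    def __init__(self, name, transform):
        self.name, self.transform, self.outputs = name, transform, []

    def connect(self, other):        # add a downstream component
        self.outputs.append(other)

    def disconnect(self, other):     # rewire without stopping anything
        self.outputs.remove(other)

    def receive(self, message):
        result = self.transform(message)
        for downstream in self.outputs:
            downstream.receive(result)

class Sink(Component):
    """Terminal component that records what reaches it."""
    def __init__(self, name):
        super().__init__(name, lambda m: m)
        self.seen = []

    def receive(self, message):
        self.seen.append(message)

# wire: sensor -> scale -> log, deliver a message, then rewire mid-stream
sensor = Component("accel", lambda m: m)
scale = Component("scale", lambda m: m * 2)
log = Sink("log")
sensor.connect(scale)
scale.connect(log)
sensor.receive(3)                    # log.seen == [6]
sensor.disconnect(scale)             # bypass the scaler at runtime
sensor.connect(log)
sensor.receive(5)                    # log.seen == [6, 5]
```

The design point mirrored here is that rewiring is an operation on the graph, not on the components, so a stock part (the accelerometer component) needs no changes to serve a new configuration.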
Advanced adaptive optics technology development
NASA Astrophysics Data System (ADS)
Olivier, Scot S.
2002-02-01
The NSF Center for Adaptive Optics (CfAO) is supporting research on advanced adaptive optics technologies. CfAO research activities include development and characterization of micro-electro-mechanical systems (MEMS) deformable mirror (DM) technology, as well as development and characterization of high-resolution adaptive optics systems using liquid crystal (LC) spatial light modulator (SLM) technology. This paper presents an overview of the CfAO advanced adaptive optics technology development activities including current status and future plans.
Reeves, Rustin E; Aschenbrenner, John E; Wordinger, Robert J; Roque, Rouel S; Sheedlo, Harold J
2004-05-01
The need to increase the efficiency of dissection in the gross anatomy laboratory has been the driving force behind the technologic changes we have recently implemented. With the introduction of an integrated systems-based medical curriculum and a reduction in laboratory teaching hours, anatomy faculty at the University of North Texas Health Science Center (UNTHSC) developed a computer-based dissection manual to adjust to these curricular changes and time constraints. At each cadaver workstation, Apple iMac computers were added and a new dissection manual, running in a browser-based format, was installed. Within the text of the manual, anatomical structures required for dissection were linked to digital images from prosected materials; in addition, for each body system, the dissection manual included images from cross sections, radiographs, CT scans, and histology. Although we have placed a high priority on computerization of the anatomy laboratory, we remain strong advocates of the importance of cadaver dissection. It is our belief that the utilization of computers for dissection is a natural evolution of technology and fosters creative teaching strategies adapted for anatomy laboratories in the 21st century. Our strategy has significantly enhanced the independence and proficiency of our students, the efficiency of their dissection time, and the quality of laboratory instruction by the faculty. Copyright 2004 Wiley-Liss, Inc.
Computer assessment of interview data using latent semantic analysis.
Dam, Gregory; Kaufmann, Stefan
2008-02-01
Clinical interviews are a powerful method for assessing students' knowledge and conceptual development. However, the analysis of the resulting data is time-consuming and can create a "bottleneck" in large-scale studies. This article demonstrates the utility of computational methods in supporting such an analysis. Thirty-four 7th-grade student explanations of the causes of Earth's seasons were assessed using latent semantic analysis (LSA). Analyses were performed on transcriptions of student responses during interviews administered prior to (n = 21) and after (n = 13) receiving earth science instruction. An instrument that uses LSA technology was developed to identify misconceptions and assess conceptual change in students' thinking. Its accuracy, as determined by comparing its classifications to the independent coding performed by four human raters, reached 90%. Techniques for adapting LSA technology to support the analysis of interview data, as well as some limitations, are discussed.
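The core LSA mechanism can be shown in a toy sketch: build a term-document matrix, take a truncated SVD, and compare a new response to reference explanations by cosine similarity in the reduced space. The corpus, documents, and dimensionality here are invented for illustration; the authors' actual instrument, training corpus, and scoring rules are not described in enough detail to reproduce.

```python
import numpy as np

docs = [
    "seasons are caused by the tilt of earth's axis",      # expert model
    "seasons happen because earth is closer to the sun",   # misconception
]
student = "summer comes when the earth moves closer to the sun"

vocab = sorted({w for d in docs + [student] for w in d.split()})

def bow(text):
    """Bag-of-words vector over the shared vocabulary."""
    return np.array([text.split().count(w) for w in vocab], float)

A = np.stack([bow(d) for d in docs], axis=1)       # terms x documents
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # latent semantic basis
k = 2
project = lambda v: U[:, :k].T @ v                 # map into latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(project(bow(d)), project(bow(student))) for d in docs]
# the student response should land nearer the misconception document
```

In a real instrument the SVD basis would be trained on a large domain corpus rather than on the reference documents themselves, and the nearest reference explanation (or a similarity threshold) would drive the misconception classification.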
Hira, A Y; Nebel de Mello, A; Faria, R A; Odone Filho, V; Lopes, R D; Zuffo, M K
2006-01-01
This article discusses a telemedicine model for emerging countries, through the description of ONCONET, a telemedicine initiative applied to pediatric oncology in Brazil. The ONCONET core technology is a Web-based system that offers health information and other services specialized in childhood cancer such as electronic medical records and cooperative protocols for complex treatments. All Web-based services are supported by the use of high performance computing infrastructure based on clusters of commodity computers. The system was fully implemented on an open-source and free-software approach. Aspects of modeling, implementation and integration are covered. A model, both technologically and economically viable, was created through the research and development of in-house solutions adapted to the emerging countries reality and with focus on scalability both in the total number of patients and in the national infrastructure.
NASA Technical Reports Server (NTRS)
Voecks, G. E.
1983-01-01
Insufficient theoretical definition of heterogeneous catalysts is the major difficulty confronting industrial suppliers who seek catalyst systems which are more active, selective, and stable than those currently available. In contrast, progress was made in tailoring homogeneous catalysts to specific reactions because more is known about the reaction intermediates promoted and/or stabilized by these catalysts during the course of reaction. However, modeling heterogeneous catalysts on a microscopic scale requires compiling and verifying complex information on reaction intermediates and pathways. This can be achieved by adapting homogeneous catalyzed reaction intermediate species, applying theoretical quantum chemistry and computer technology, and developing a better understanding of heterogeneous catalyst system environments. Research in microscopic reaction modeling is now at a stage where computer modeling, supported by physical experimental verification, could provide information about the dynamics of the reactions that will lead to designing supported catalysts with improved selectivity and stability.
Yarosh, Svetlana; Schueller, Stephen Matthew
2017-01-17
Positive psychological interventions for children have typically focused on direct adaptations of interventions developed for adults. As the community moves toward designing positive computing technologies to support child well-being, it is important to use a more participatory process that directly engages children's voices. Our objectives were, through a participatory design study, to understand children's interpretations of positive psychology concepts, as well as their perspectives on technologies that are best suited to enhance their engagement with the practice of well-being skills. We addressed these questions through a content analysis of 434 design ideas, 51 sketches, and 8 prototypes and videos, which emerged from a 14-session cooperative inquiry study with 12 child "happiness inventors." The study was part of a summer learning camp held at the children's middle school, which focused on teaching the invention process, teaching well-being skills drawn from positive psychology and related areas (gratitude, mindfulness, and problem solving), and iterating design ideas for technologies to support these skills. The children's ideas and prototypes revealed specific facets of how they interpreted gratitude (as thanking, being positive, and doing good things), mindfulness (as externally representing thoughts and emotions, controlling those thoughts and emotions, getting through unpleasant things, and avoiding forgetting something), and problem solving (as preventing bad decisions, seeking alternative solutions, and not dwelling on unproductive thoughts). This process also revealed that children emphasized particular technologies in their solutions. While desktop or laptop solutions were notably lacking, other ideas were roughly evenly distributed between mobile apps and embodied computing technologies (toys, wearables, etc.).
We also report on desired functionalities and approaches to engagement in the children's ideas, such as a notable emphasis on representing and responding to internal states. Our findings point to promising directions for the design of positive computing technologies targeted at children, with particular emphases on the perspectives, technologies, engagement approaches, and functionalities that appealed to the children in our study. The dual focus of the study on teaching skills while designing technologies is a novel methodology in the design of positive computing technologies intended to increase child well-being. ©Svetlana Yarosh, Stephen Matthew Schueller. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.01.2017.
Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
NASA Astrophysics Data System (ADS)
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs Polynomial Chaos expansions to approximate the original system, so that the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF outperforms EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
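The EnKF baseline that the abstract compares against can be sketched as a single analysis step: each ensemble member is nudged toward a (perturbed) observation using covariances estimated from the ensemble itself. The symbols follow the standard stochastic-EnKF formulation, not the paper's code, and the toy one-state example below is invented for illustration.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_var, rng):
    """One EnKF analysis step.

    ensemble: (n_state, n_members) forecast ensemble;
    H: (n_obs, n_state) linear observation operator;
    obs: (n_obs,) observation vector; obs_var: scalar observation variance.
    """
    n_obs, n_members = H.shape[0], ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)    # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    # ensemble-estimated covariances P H^T and H P H^T
    PHt = X @ HXp.T / (n_members - 1)
    HPHt = HXp @ HXp.T / (n_members - 1)
    K = PHt @ np.linalg.inv(HPHt + obs_var * np.eye(n_obs))  # Kalman gain
    # perturbed observations keep the posterior spread statistically correct
    perturbed = obs[:, None] + rng.normal(0, np.sqrt(obs_var), (n_obs, n_members))
    return ensemble + K @ (perturbed - HX)

rng = np.random.default_rng(0)
ens = rng.normal(5.0, 2.0, (1, 200))    # prior ensemble for one state variable
H = np.array([[1.0]])                   # the state is observed directly
post = enkf_update(ens, H, np.array([2.0]), 0.1, rng)
# posterior mean is pulled from ~5 toward the observation 2.0,
# and the posterior spread shrinks below the prior spread
```

The "relatively large ensemble size" issue the abstract mentions shows up here directly: `PHt` and `HPHt` are sample estimates, so their error shrinks only as the number of members grows, which is what PCKF-style surrogates try to avoid.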
Carrillo, Snaider; Harkin, Jim; McDaid, Liam; Pande, Sandeep; Cawley, Seamus; McGinley, Brian; Morgan, Fearghal
2012-09-01
The brain is highly efficient in how it processes information and tolerates faults. Arguably, the basic processing units are neurons and synapses that are interconnected in a complex pattern. Computer scientists and engineers aim to harness this efficiency and build artificial neural systems that can emulate the key information processing principles of the brain. However, existing approaches cannot provide the dense interconnect for the billions of neurons and synapses that are required. Recently a reconfigurable and biologically inspired paradigm based on network-on-chip (NoC) and spiking neural networks (SNNs) has been proposed as a new method of realising an efficient, robust computing platform. However, the use of the NoC as an interconnection fabric for large-scale SNNs demands a good trade-off between scalability, throughput, neuron/synapse ratio and power consumption. This paper presents a novel traffic-aware, adaptive NoC router, which forms part of a proposed embedded mixed-signal SNN architecture called EMBRACE (EMulating Biologically-inspiRed ArChitectures in hardwarE). The proposed adaptive NoC router provides the inter-neuron connectivity for EMBRACE, maintaining router communication and avoiding dropped router packets by adapting to router traffic congestion. Results are presented on throughput, power and area performance analysis of the adaptive router using a 90 nm CMOS technology which outperforms existing NoCs in this domain. The adaptive behaviour of the router is also verified on a Stratix II FPGA implementation of a 4 × 2 router array with real-time traffic congestion. The presented results demonstrate the feasibility of using the proposed adaptive NoC router within the EMBRACE architecture to realise large-scale SNNs on embedded hardware. Copyright © 2012 Elsevier Ltd. All rights reserved.
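The traffic-aware adaptive routing behaviour described for the router can be illustrated in a few lines: at each hop in a 2D mesh, pick, among the output ports that move the packet closer to its destination, the one with the least-occupied buffer. This is a generic minimal-adaptive-routing sketch, not the EMBRACE router's RTL; the coordinate convention (y increasing toward "north") and the port names are assumptions for the example.

```python
def route(current, dest, buffer_load):
    """Pick the least-congested productive output port in a 2D mesh.

    current/dest: (x, y) grid positions; buffer_load: dict port -> occupancy.
    """
    x, y = current
    dx, dy = dest[0] - x, dest[1] - y
    candidates = []                      # ports that shorten the path
    if dx > 0: candidates.append("east")
    if dx < 0: candidates.append("west")
    if dy > 0: candidates.append("north")
    if dy < 0: candidates.append("south")
    if not candidates:
        return "local"                   # packet has arrived
    # adaptive choice: steer around congestion instead of dropping packets
    return min(candidates, key=lambda p: buffer_load[p])

# both east and north lead toward (3, 3); east is congested, so go north
port = route((1, 1), (3, 3), {"east": 7, "west": 0, "north": 2, "south": 1})
```

Restricting the choice to minimal paths keeps packets progressing toward the destination, while the congestion-based tie-break provides the adaptive behaviour that avoids dropped packets under load.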
González-Nilo, Fernando; Pérez-Acle, Tomás; Guínez-Molinos, Sergio; Geraldo, Daniela A; Sandoval, Claudia; Yévenes, Alejandro; Santos, Leonardo S; Laurie, V Felipe; Mendoza, Hegaly; Cachau, Raúl E
2011-01-01
After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatic and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatic and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area as seen from the perspective of the evolution of nanobiotechnology applied to the life sciences. The knowledge obtained at the nano-scale level implies answers to new questions and the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnologies has spun off collaborative networks and web platforms created for sharing and discussing the knowledge generated in nanobiotechnology. The implementation of new database schemes suitable for storage, processing and integrating physical, chemical, and biological properties of nanoparticles will be a key element in achieving the promises in this convergent field. In this work, we will review some applications of nanobiotechnology to life sciences in generating new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.
Developing Personal Learning Environments Based on Calm Technologies
NASA Astrophysics Data System (ADS)
Fiaidhi, Jinan
Educational technology is constantly evolving and growing, and it is inevitable that this progression will continually offer new and interesting advances in our world. The use of calm technologies for the delivery of education is another new approach now emerging. Calm technology aims to reduce the "excitement" of information overload by letting the learner select what information is at the center of their attention and what information remains at the periphery. In this paper we report on the adaptation of calm technologies in an educational setting, with emphasis on the need to cater to the preferences of the individual learner so as to respond to the challenge of providing truly learner-centered, accessible, personalized and flexible learning. Central to the calm computing vision is the notion of representing learning objects as widgets, harvesting widgets from the periphery based on semantic wikis, as well as widget garbage collection from the virtual/central learning memory.
[Impact of digital technology on clinical practices: perspectives from surgery].
Zhang, Y; Liu, X J
2016-04-09
Digital medical technologies or computer aided medical procedures, refer to imaging, 3D reconstruction, virtual design, 3D printing, navigation guided surgery and robotic assisted surgery techniques. These techniques are integrated into conventional surgical procedures to create new clinical protocols that are known as "digital surgical techniques". Conventional health care is characterized by subjective experiences, while digital medical technologies bring quantifiable information, transferable data, repeatable methods and predictable outcomes into clinical practices. Being integrated into clinical practice, digital techniques facilitate surgical care by improving outcomes and reducing risks. Digital techniques are becoming increasingly popular in trauma surgery, orthopedics, neurosurgery, plastic and reconstructive surgery, imaging and anatomic sciences. Robotic assisted surgery is also evolving and being applied in general surgery, cardiovascular surgery and orthopedic surgery. Rapid development of digital medical technologies is changing healthcare and clinical practices. It is therefore important for all clinicians to purposefully adapt to these technologies and improve their clinical outcomes.
Technology for communicational development and learning in psychomotor disability
NASA Astrophysics Data System (ADS)
Trento, I.; Santucci, M.; Tula, S.; González, E.
2007-11-01
The applied investigation and experimental development project described in this paper has been carried out by the Grupo Ingeniería Clínica of the Universidad Tecnológica Nacional together with two Special Education Schools dependent on the Ministry of Education of Córdoba Province. Its aim is the development of computer access assistive tools for students with mobility limitations, with or without intellectual problems, who need adaptations in order to use a computer to learn, communicate, work, etc. It also demonstrates the benefits that the use of a computer gives to these students. The evaluation of their performance was made through Dr. Marianne Frostig's Developmental Test of Visual Perception and reading and writing graphic tests, comparing the results of the tests made on paper with those made on computer. An interdisciplinary team was formed by Engineering, Psychology and Special Education professionals, and 40 students were evaluated. Adaptations were made to the design of the mouse and keyboard. At present, the rating test stage is being completed, and the preliminary results allow us to anticipate that pupils with psychomotor disabilities may demonstrate their perceptual maturity and receive education more efficiently through the use of informatics tools suited to their needs and possibilities.
Influence of technological factors on characteristics of hybrid fluid-film bearings
NASA Astrophysics Data System (ADS)
Koltsov, A.; Prosekova, A.; Rodichev, A.; Savin, L.
2017-08-01
The influence of the parameters of micro- and macro-unevenness on the characteristics of a hybrid bearing with slotted throttling is considered in the present paper. Quantitative estimates of pressure distribution, load capacity, lubricant flow rate and power loss due to friction in a radial hybrid bearing with slotted throttling are obtained, taking into account inaccuracies in the shape, dimensions and roughness of the support surfaces. Numerical simulation of processes in the lubricating layer is based on a finite-difference solution of the Reynolds equation using a non-uniform orthogonal computational grid with adaptive refinement. The results of computational and physical experiments are presented.
Quantification of Uncertainty in Extreme Scale Computations (QUEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghanem, Roger
QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.
Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming
NASA Astrophysics Data System (ADS)
Fisher, Ward
2014-05-01
Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. 
We will discuss the functionality and limitations of existing application streaming frameworks and how a developer might prepare their software for application streaming. We will also examine the secondary benefits realized by moving legacy software to the cloud. Finally, we will examine the process by which a legacy Java application, the Integrated Data Viewer (IDV), is to be adapted for tablet computing via Application Streaming.
2006-10-01
NCAPS ) Christina M. Underhill, Ph.D. Approved for public release; distribution is unlimited. NPRST-TN-06-9 October 2006...Investigation of Item-Pair Presentation and Construct Validity of the Navy Computer Adaptive Personality Scales ( NCAPS ) Christina M. Underhill, Ph.D...documents one of the steps in our development of the Navy Computer Adaptive Personality Scales ( NCAPS ). NCAPS is a computer adaptive personality measure
Dental technology services and industry trends in New Zealand from 2010 to 2012.
Alameri, S S; Aarts, J M; Smith, M; Waddell, J N
2014-06-01
To provide a snapshot of the New Zealand dental technology industry and its influencing factors. Developing an understanding of the commercial dental laboratory environment in New Zealand can provide insight into the entire dental industry. A web-based survey was the primary method for data collection, with separate questionnaires used for dental laboratory owners and dental technician employees. The mean net income for dental laboratory owners in New Zealand was similar to that of the United Kingdom, at $40.50 per hour. Clinical dental technicians were the highest paid employees, with a mean of $33.49 per hour. The mean technical charge for complete dentures was $632.59; including clinical services, it was $1907.00. The mean charge for a porcelain-fused-to-metal (PFM) crown was $290.27. Dental laboratory owners expressed fear about the possibility of losing dental clients to overseas laboratories because of the availability and low cost of offshore work. Only 25.4% of dental laboratories surveyed had computer-aided design (CAD) facilities, and even fewer (7.9%) had computer-aided manufacturing (CAM) systems. Clinical dental technology appears to be prospering. The dental technology industry appears to be adapting and remains viable, despite facing many challenges.
A Framework for Matching User Needs to an Optimal Level of Office Automation
1988-06-01
TECHNOSTRESS Craig Brod coins the term " technostress " to describe the emotional stress induced by the introduction of new technology. (Brod, 1984, pp. 28... Technostress has a very negative effect on the productivity of people who use OA systems. Common indicators of technostress are very slow learning... technostress using a strategy which divides adaptation to computers into three phases called orientation, operations and mastery. 59 1. Orientation The
2017-02-17
Psychology. Brooke, J. (1996). SUS: a ‘quick and dirty ’ usability scale. In P. Jordan, B. Thomas, I. McClelland, & B. Weerdmeester (Eds.), Usability...level modeling, International Journal of Human Computer Studies, Vol. 45(3). Menzies, T. (1996b). On the Practicality of Abductive Validation, ECAI...1). Shima, T., & Rasmussen, S. (2009). UAV Cooperative Decision and Control: Challenges and Practical Approaches, SIAM Publications, ISBN
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dyson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving the varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
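The Hermitian divided differences underpinning this implementation extend Newton's divided-difference table to repeated nodes: a node repeated m times carries the function value and its first m-1 derivatives, and a difference over a run of equal nodes equals f^(j)(x)/j!. A minimal sketch of the idea (not the authors' MESA code, and in ordinary double precision rather than 128-bit arithmetic):

```python
from math import factorial

def hermite_divided_differences(xs, derivs):
    """Newton-form coefficients for Hermite interpolation.

    xs     : distinct nodes
    derivs : derivs[i] = [f(x_i), f'(x_i), ..., f^(m_i-1)(x_i)]
    """
    # Flatten nodes, repeating each one once per known derivative.
    z, f0 = [], []
    for x, ds in zip(xs, derivs):
        for _ in ds:
            z.append(x)
            f0.append(ds[0])
    n = len(z)
    table = [f0[:]]                      # table[j] = j-th difference column
    for j in range(1, n):
        col = []
        for i in range(n - j):
            if z[i + j] == z[i]:
                # Run of repeated nodes: difference is f^(j)(x)/j!.
                k = next(k for k, x in enumerate(xs) if x == z[i])
                col.append(derivs[k][j] / factorial(j))
            else:
                col.append((table[j - 1][i + 1] - table[j - 1][i])
                           / (z[i + j] - z[i]))
        table.append(col)
    return z, [table[j][0] for j in range(n)]

def newton_eval(z, coeffs, x):
    """Evaluate the Newton form built on the repeated-node sequence z."""
    acc, basis = 0.0, 1.0
    for c, zi in zip(coeffs, z):
        acc += c * basis
        basis *= (x - zi)
    return acc
```

With values and first derivatives of f(x) = x^2 at x = 0 and x = 1, the interpolant reproduces x^2 exactly, as expected for a cubic-capable Hermite fit.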
Anglicisms in the Romanian business and technology vocabulary
NASA Astrophysics Data System (ADS)
Todea, L.; Demarcsek, R.
2016-08-01
Multinational companies in Romania have imposed the use of a predominant language, in most cases English, in professional communication. In contexts related to workplace communication, the main motivation for foreign borrowings is the need to denote concepts and activities. The article focuses on the English language as a rich source of innovations, at both the lexical and the morphological level, in the Romanian vocabulary related to business and technology. The aim of the paper is to demonstrate that the Romanian language displays a natural disposition towards adopting and adapting foreign words, especially borrowed English terms, in the fields of computer science and business, without endangering its identity.
Advanced sensors and instrumentation
NASA Technical Reports Server (NTRS)
Calloway, Raymond S.; Zimmerman, Joe E.; Douglas, Kevin R.; Morrison, Rusty
1990-01-01
NASA is currently investigating the readiness of Advanced Sensors and Instrumentation to meet the requirements of new initiatives in space. The following technical objectives and technologies are briefly discussed: smart and nonintrusive sensors; onboard signal and data processing; high capacity and rate adaptive data acquisition systems; onboard computing; high capacity and rate onboard storage; efficient onboard data distribution; high capacity telemetry; ground and flight test support instrumentation; power distribution; and workstations, video/lighting. The requirements for high fidelity data (accuracy, frequency, quantity, spatial resolution) in hostile environments will continue to push the technology developers and users to extend the performance of their products and to develop new generations.
HERA: A New Platform for Embedding Agents in Heterogeneous Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Alonso, Ricardo S.; de Paz, Juan F.; García, Óscar; Gil, Óscar; González, Angélica
Ambient Intelligence (AmI) based systems require the development of innovative solutions that integrate distributed intelligent systems with context-aware technologies. In this sense, Multi-Agent Systems (MAS) and Wireless Sensor Networks (WSN) are two key technologies for developing distributed systems based on AmI scenarios. This paper presents the new HERA (Hardware-Embedded Reactive Agents) platform, which allows the use of dynamic and self-adaptable heterogeneous WSNs in which agents are embedded directly on the wireless nodes. This approach facilitates the inclusion of context-aware capabilities in AmI systems to gather data from their surrounding environments, achieving a higher level of ubiquitous and pervasive computing.
Greening, S E; Grohs, D H; Guidos, B J
1997-01-01
Providing effective training, retraining and evaluation programs, including proficiency testing programs, for cytoprofessionals is a challenge shared by many academic and clinical educators internationally. In cytopathology the quality of training has immediately transferable and critically important impacts on satisfactory performance in the clinical setting. Well-designed interactive computer-assisted instruction and testing programs have been shown to enhance initial learning and to reinforce factual and conceptual knowledge. Computer systems designed not only to promote diagnostic accuracy but to integrate and streamline work flow in clinical service settings are candidates for educational adaptation. The AcCell 2000 system, designed as a diagnostic screening support system, offers technology that is adaptable to educational needs during basic and in-service training as well as testing of screening proficiency in both locator and identification skills. We describe the considerations, approaches and applications of the AcCell 2000 system in education programs for both training and evaluation of gynecologic diagnostic screening proficiency.
A neural net based architecture for the segmentation of mixed gray-level and binary pictures
NASA Technical Reports Server (NTRS)
Tabatabai, Ali; Troudet, Terry P.
1991-01-01
A neural-net-based architecture is proposed to perform segmentation in real time for mixed gray-level and binary pictures. In this approach, the composite picture is divided into 16 x 16 pixel blocks, which are identified as character blocks or image blocks on the basis of a dichotomy measure computed by an adaptive 16 x 16 neural net. For compression purposes, each image block is further divided into 4 x 4 subblocks; a one-bit nonparametric quantizer is used to encode 16 x 16 character and 4 x 4 image blocks; and the binary map and quantizer levels are obtained through a neural net segmentor over each block. The efficiency of the neural segmentation in terms of computational speed, data compression, and quality of the compressed picture is demonstrated. The effect of weight quantization is also discussed. VLSI implementations of such adaptive neural nets in CMOS technology are described and simulated in real time for a maximum block size of 256 pixels.
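The block classification and one-bit quantization described above can be illustrated in a few lines. In this sketch a simple variance threshold stands in for the paper's neural dichotomy measure (the threshold value is an assumption); the one-bit nonparametric quantizer stores a binary map plus two reconstruction levels per block, with image blocks further split into 4x4 subblocks as in the abstract:

```python
import numpy as np

def one_bit_quantize(block):
    """One-bit nonparametric quantizer: threshold at the block mean,
    reconstruct each side by its own mean (binary map + two levels)."""
    t = block.mean()
    bitmap = block > t
    lo = block[~bitmap].mean() if (~bitmap).any() else t
    hi = block[bitmap].mean() if bitmap.any() else t
    return bitmap, float(lo), float(hi)

def segment_and_encode(img, block=16, var_thresh=900.0):
    """Split img into block x block tiles, label each 'character' or
    'image' (variance stand-in for the neural dichotomy measure),
    then encode: whole 16x16 character blocks, 4x4 image subblocks."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    labels = {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = img[r:r+block, c:c+block].astype(float)
            kind = 'character' if tile.var() > var_thresh else 'image'
            labels[(r, c)] = kind
            if kind == 'character':
                bm, lo, hi = one_bit_quantize(tile)
                out[r:r+block, c:c+block] = np.where(bm, hi, lo)
            else:
                for rr in range(0, block, 4):
                    for cc in range(0, block, 4):
                        sub = tile[rr:rr+4, cc:cc+4]
                        bm, lo, hi = one_bit_quantize(sub)
                        out[r+rr:r+rr+4, c+cc:c+cc+4] = np.where(bm, hi, lo)
    return out, labels
```

Two-valued (text-like) blocks are reconstructed exactly by this quantizer, which is why character regions survive the one-bit encoding so well.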
Varela, Gervasio; Paz-Lopez, Alejandro; Becerra, Jose A.; Duro, Richard
2016-01-01
This paper addresses the problem of developing user interfaces for Ubiquitous Computing (UC) and Ambient Intelligence (AmI) systems. These kinds of systems are expected to provide a natural user experience, offering interaction modalities adapted to the user's abilities and preferences and using whatever interaction devices are present in the environment. These interaction devices are not necessarily known at design time. The task is complicated by the variety of devices and technologies and the diversity of scenarios, and it usually burdens the developer with the need to create many different UIs in order to cover the foreseeable user-environment combinations. Here, we propose a UI abstraction framework for UC and AmI systems that effectively improves the portability of those systems between different environments and for different users. It allows developers to design and implement a single UI capable of being deployed with different devices and modalities regardless of physical location. PMID:27399711
Adapting bioinformatics curricula for big data.
Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H
2016-01-01
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously in an adaptive process; (2) the adaptation is performed with the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy compared with the standard DNA computing algorithm. PMID:23935409
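The quantum-behaved PSO at the core of this approach resamples each particle around a local attractor, with a spread proportional to its distance from the mean of the personal bests (mbest). A minimal QPSO sketch, here minimizing a plain test function rather than tuning DNA computing parameters as the paper does; the population size, iteration count and contraction factor beta are illustrative defaults, not the paper's settings:

```python
import math
import random

def qpso(objective, dim, n_particles=30, iters=200, beta=0.75, bounds=(-5.0, 5.0)):
    """Minimal quantum-behaved PSO (minimization).

    Each particle is resampled around a local attractor p, a random convex
    mix of its personal best and the global best, with spread set by its
    distance to the mean of all personal bests (mbest):
        x = p +/- beta * |mbest - x| * ln(1/u),  u ~ U(0, 1]
    """
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pcost = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                phi = random.random()
                p = phi * pbest[i][d] + (1.0 - phi) * gbest[d]
                u = 1.0 - random.random()      # u in (0, 1], keeps log finite
                step = beta * abs(mbest[d] - xs[i][d]) * math.log(1.0 / u)
                xs[i][d] = p + step if random.random() < 0.5 else p - step
                xs[i][d] = min(hi, max(lo, xs[i][d]))
            cost = objective(xs[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], cost
                if cost < gcost:
                    gbest, gcost = xs[i][:], cost
    return gbest, gcost
```

Because QPSO has no velocity term, only the single contraction factor beta needs tuning, which is part of its appeal for driving an adaptive outer loop.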
HapHop-Physio: a computer game to support cognitive therapies in children.
Rico-Olarte, Carolina; López, Diego M; Narváez, Santiago; Farinango, Charic D; Pharow, Peter S
2017-01-01
Care and support of children with physical or mental disabilities raise serious concerns for parents, families, healthcare institutions, schools, and their communities. Recent studies and technological innovations have demonstrated the feasibility of providing therapy and rehabilitation services to children supported by computer games. The aim of this paper is to present HapHop-Physio, an innovative computer game that combines exercise with fun and learning, developed to support cognitive therapies in children. Conventional software engineering methods, such as the Scrum methodology, a functionality test and a related usability test, were part of the comprehensive methodology adapted to develop HapHop-Physio. The game supports visual and auditory attention therapies, as well as visual and auditory memory activities. It was developed by a multidisciplinary team, based on the Hopscotch® platform provided by the Fraunhofer Institute for Digital Media Technology (IDMT) in Germany, and designed in collaboration with a rehabilitation clinic in Colombia. HapHop-Physio was tested and evaluated to probe its functionality and user satisfaction. The results show the development of an easy-to-use and fun game by a multidisciplinary team using state-of-the-art videogame technologies and software methodologies. Children testing the game concluded that they would like to play it again while undergoing rehabilitation therapies.
Adams, Audrey; Timmins, Fiona
2006-01-01
This paper describes students' experiences of a Web-based innovation at one university, reporting on the first phase of the development, in which two Web-based modules were created. Using a survey approach (n=44), students' access to and use of computer technology were explored. Findings revealed that students' prior use of computers and Internet technologies was higher than previously reported, although use of databases was low. Skills in this area increased during the programme, with a significant rise in database, email, search engine and word processing use. Many specific computer skills were learned during the programme, with high numbers reporting the ability to deal adequately with files and folders. Overall, the experience was a positive one for students. While a sense of student isolation was not reported, as many students kept in touch by phone and class attendance continued, some individual students did appear to isolate themselves. This teaching methodology has much to offer in the provision of convenient, easy-to-access programmes that can be adapted to individual lifestyles. However, student support mechanisms need careful consideration for students who are at risk of becoming isolated. Staff also need to be supported in the provision of this methodology, and face-to-face contact with teachers for some part of the programme is preferable.
Multiconjugate adaptive optics applied to an anatomically accurate human eye model.
Bedggood, P A; Ashman, R; Smith, G; Metha, A B
2006-09-04
Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1993-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on 6 June 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. A flexible scientific staff is provided through a university faculty visitor program, a post doctoral program, and a student visitor program. Not only does this provide appropriate expertise but it also introduces scientists outside of NASA to NASA problems. A small group of core RIACS staff provides continuity and interacts with an ARC technical monitor and scientific advisory group to determine the RIACS mission. RIACS activities are reviewed and monitored by a USRA advisory council and ARC technical monitor. Research at RIACS is currently being done in the following areas: Parallel Computing, Advanced Methods for Scientific Computing, High Performance Networks and Technology, and Learning Systems. Parallel compiler techniques, adaptive numerical methods for flows in complicated geometries, and optimization were identified as important problems to investigate for ARC's involvement in the Computational Grand Challenges of the next decade.
Gay and bisexual men's use of the Internet: research from the 1990s through 2013.
Grov, Christian; Breslow, Aaron S; Newcomb, Michael E; Rosenberger, Joshua G; Bauermeister, Jose A
2014-01-01
We document the historical and cultural shifts in how gay and bisexual men have used the Internet for sexuality between the 1990s and 2013-including shifting technology as well as research methods to study gay and bisexual men online. Gay and bisexual men have rapidly taken to using the Internet for sexual purposes: for health information seeking, finding sex partners, dating, cybersex, and pornography. Men have adapted to the ever-evolving technological advances that have been made in connecting users to the Internet-from logging on via dial-up modem on a desktop computer to geo-social-sexual networking via handheld devices. In kind, researchers have adapted to the Internet to study gay and bisexual men. Studies have carefully considered the ethics, feasibility, and acceptability of using the Internet to conduct research and interventions. Much of this work has been grounded in models of disease prevention, largely as a result of the ongoing HIV/AIDS epidemic. The need to reduce HIV in this population has been a driving force to develop innovative research and Internet-based intervention methodologies. The Internet, and specifically mobile technology, is an environment gay and bisexual men are using for sexual purposes. These innovative technologies represent powerful resources for researchers to study and provide outreach.
[The operating room of the future].
Broeders, I A; Niessen, W; van der Werken, C; van Vroonhoven, T J
2000-01-29
Advances in computer technology will revolutionize surgical techniques in the next decade. The operating room (OR) of the future will be connected with a laboratory where clinical specialists and researchers prepare image-guided interventions and explore the possibilities of these techniques. The virtual reality is linked to the actual situation in the OR with the aid of navigation instruments. During complicated operations, the images prepared preoperatively will be corrected on the basis of information obtained intraoperatively. MRI currently offers the greatest possibilities for image-guided surgery of soft tissues. Simpler techniques such as fluoroscopy and echography will become increasingly integrated into computer-assisted intraoperative navigation. The development of medical robot systems will make microsurgical procedures by the endoscopic route possible. Tele-manipulation systems will also play a part in the training of surgeons. The design and construction of the OR will be adapted to the surgical technology, and will include an information and control unit where preoperative and intraoperative data come together and from where the surgeon operates the instruments. Concepts for the future OR should be regularly adjusted to allow for new surgical technology.
Quality based approach for adaptive face recognition
NASA Astrophysics Data System (ADS)
Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.
2009-05-01
Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the usage of low quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and if necessary restore image quality according to the need of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have impact on face recognition. The first is called symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). Also, an adaptive strategy has been developed to select the best way to restore the image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for adaptive strategy are: (1) avoidance of excessive unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity which is essential for real time applications. We test the success of the proposed measures and adaptive approach for a wavelet-based face recognition system that uses the nearest neighborhood classifier. We shall demonstrate noticeable improvements in the performance of adaptive face recognition system over the corresponding non-adaptive scheme.
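The adaptive strategy, enhance only when a quality measure says enhancement is needed, can be sketched as follows. SALQI, MH and SAHE are specific to the paper; this illustration substitutes a simple RMS-contrast score and plain histogram equalization, and the quality threshold is an assumption:

```python
import numpy as np

def contrast_score(img):
    """Stand-in no-reference quality measure: RMS contrast, roughly in [0, 1]."""
    return float(img.std() / 128.0)

def equalize(img):
    """Plain histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def adaptive_enhance(img, q_thresh=0.25):
    """Enhance only when the quality score says it is needed, avoiding the
    artifacts of blanket enhancement (the paper's adaptive idea, with RMS
    contrast standing in for SALQI/MH)."""
    if contrast_score(img) < q_thresh:
        return equalize(img), True
    return img, False
```

The gate is the point of the exercise: well-exposed faces pass through untouched, so no artifacts are introduced and no computation is wasted on them.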
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed sequentially. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance, with a 26.3x speedup over the original CPU code.
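The block-parallel strategy, partitioning the image into 64×64 tiles so each tile can be encoded independently while context modeling stays sequential only within a tile, can be sketched on a CPU. Here zlib stands in for the per-block JPEG-LS encoder and a thread pool stands in for CUDA thread blocks; only the 64×64 block size comes from the abstract:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def split_blocks(img, bs=64):
    """Partition an image into bs x bs tiles (the unit of parallelism)."""
    h, w = img.shape
    return [((r, c), img[r:r+bs, c:c+bs].copy())
            for r in range(0, h, bs) for c in range(0, w, bs)]

def compress_blocks(img, bs=64, workers=4):
    """Compress every tile independently and in parallel; zlib is a
    stand-in codec for the per-block JPEG-LS encoder."""
    tiles = split_blocks(img, bs)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        payloads = list(pool.map(lambda t: zlib.compress(t[1].tobytes()), tiles))
    return dict(zip([k for k, _ in tiles], payloads))

def decompress_blocks(blobs, shape, bs=64, dtype=np.uint8):
    """Lossless reassembly; assumes shape is a multiple of bs, as with
    the 64x64 tiling of AVIRIS scenes."""
    out = np.empty(shape, dtype=dtype)
    for (r, c), blob in blobs.items():
        tile = np.frombuffer(zlib.decompress(blob), dtype=dtype)
        out[r:r+bs, c:c+bs] = tile.reshape(bs, bs)
    return out
```

The trade-off the paper measures falls out of this structure: smaller blocks mean more parallelism (more speedup) but less context per block (worse compression ratio).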
Revision and Expansion of Navy Computer Adaptive Personality Scales (NCAPS)
2007-08-01
Navy Personnel Research, Studies, and Technology Division. Revision and Expansion of Navy Computer Adaptive Personality Scales (NCAPS) Robert J. Schneider, Ph.D...TN-07-12 August 2007 Revision and Expansion of Navy Computer Adaptive Personality Scales (NCAPS) Robert J. Schneider, Ph.D. Kerri L. Ferstl, Ph.D...03/31/2006 4. TITLE AND SUBTITLE 5a. CONTRACT NUMBER Revision and Expansion of Navy Computer Adaptive Personality Scales (NCAPS) 5b. GRANT NUMBER 5c
2006-10-01
Investigation of Item-Pair Presentation and Construct Validity of the Navy Computer Adaptive Personality Scales ( NCAPS ) Christina M. Underhill, Ph.D...Construct Validity of the Navy Computer Adaptive Personality Scales ( NCAPS ) Christina M. Underhill, Ph.D. Reviewed and Approved by Jacqueline A. Mottern...and Construct Validity of the Navy Computer Adaptive Personality Scales ( NCAPS ) 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 0602236N and 0603236N 6
A video-based real-time adaptive vehicle-counting system for urban roads.
Liu, Fei; Zeng, Zhiyuan; Jiang, Rong
2017-01-01
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
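The background-update and detection-line ideas can be illustrated with a toy counter: an exponential running average models the empty road, and a vehicle is counted on each rising edge of occupancy along a virtual detection line. The update rate and difference threshold here are assumptions, and the paper's actual algorithm is considerably more robust:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: a simple stand-in for the paper's
    adaptive background-update step."""
    return (1 - alpha) * bg + alpha * frame

def line_occupancy(frame, bg, row, thresh=30.0):
    """Foreground mask restricted to a virtual detection line (one row)."""
    diff = np.abs(frame[row].astype(float) - bg[row])
    return diff > thresh

def count_crossings(frames, row, thresh=30.0, alpha=0.05):
    """Count rising edges of occupancy on the detection line: a vehicle
    is counted when the line goes from empty to occupied."""
    bg = frames[0].astype(float)
    count, was_occupied = 0, False
    for frame in frames[1:]:
        occupied = line_occupancy(frame, bg, row, thresh).any()
        if occupied and not was_occupied:
            count += 1
        was_occupied = occupied
        # Update the background only while the line is empty, so
        # vehicles do not bleed into the model.
        if not occupied:
            bg = update_background(bg, frame, alpha)
    return count
```

Gating the background update on occupancy is the key adaptive trick: lighting drift is absorbed while stopped or slow vehicles are kept out of the background model.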
Current Grid operation and future role of the Grid
NASA Astrophysics Data System (ADS)
Smirnova, O.
2012-12-01
Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid operation requires not only a technologically sound basis, but also reliable operational procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the heavier the burden on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us.
If no standardisation and convergence efforts take place, the Grid will remain limited to HEP; if, however, the current multitude of Grid-like systems converges to a generic, modular and extensible solution, the Grid will become true to its name.
Charlebois, Kathleen; Palmour, Nicole; Knoppers, Bartha Maria
2016-01-01
This study aims to understand the influence of the ethical and legal issues on cloud computing adoption in the field of genomics research. To do so, we adapted Diffusion of Innovation (DoI) theory to enable understanding of how key stakeholders manage the various ethical and legal issues they encounter when adopting cloud computing. Twenty semi-structured interviews were conducted with genomics researchers, patient advocates and cloud service providers. Thematic analysis generated five major themes: 1) Getting comfortable with cloud computing; 2) Weighing the advantages and the risks of cloud computing; 3) Reconciling cloud computing with data privacy; 4) Maintaining trust and 5) Anticipating the cloud by creating the conditions for cloud adoption. Our analysis highlights the tendency among genomics researchers to gradually adopt cloud technology. Efforts made by cloud service providers to promote cloud computing adoption are confronted by researchers’ perpetual cost and security concerns, along with a lack of familiarity with the technology. Further underlying those fears are researchers’ legal responsibility with respect to the data that is stored on the cloud. Alternative consent mechanisms aimed at increasing patients’ control over the use of their data also provide a means to circumvent various institutional and jurisdictional hurdles that restrict access by creating siloed databases. However, the risk of creating new, cloud-based silos may run counter to the goal in genomics research to increase data sharing on a global scale. PMID:27755563
NASA Technical Reports Server (NTRS)
Anyiwo, Joshua C.
2000-01-01
Vixen is a collection of enabling technologies for uninhibited distributed object computing. In the Spring of 1995 when Vixen was proposed, it was an innovative idea very much ahead of its time. But today the technologies proposed in Vixen have become standard technologies for Enterprise Computing. Sun Microsystems' J2EE/EJB specifications, among others, are independently proposed technologies of the Vixen type. I have brought Vixen completely under the J2EE standard in order to maximize interoperability and compatibility with other computing industry efforts. Vixen and the Enterprise JavaBean (EJB) Server technologies are now practically identical; OIL, another Vixen technology, and the Java Messaging System (JMS) are practically identical; and so on. There is no longer anything novel or patentable in the Vixen work performed under this grant. The above discussion notwithstanding, my independent development of Vixen has significantly helped me, my university, my students and the local community. The undergraduate students who worked with me in developing Vixen have enhanced their expertise in what has become the cutting-edge technology of their industry and are therefore well positioned for lucrative employment opportunities in the industry. My academic department has gained a new course: "Multi-media System Development", which provides a highly desirable expertise to our students for employment in any enterprise today. The many Outreach Programs that I conducted during this grant period have exposed local Middle School students to the contributions that NASA is making in our society as well as awakened desires in many such students for careers in Science and Technology.
I have applied Vixen to the development of two software packages: (a) JAS: Joshua Application Server - which allows a user to configure an EJB Server to serve a J2EE compliant application over the world wide web; (b) PCM: Professor Course Manager: a J2EE compliant application for configuring a course for distance learning. These types of applications are, however, generally available in the industry today.
DTD Creation for the Software Technology for Adaptable, Reliable Systems (STARS) Program
1990-06-23
developed to store documents in a format peculiar to the program's design. Editing the document became easy since word processors adjust all spacing and...descriptive markup may be output to a variety of devices ranging from high-quality typography printers through laser printers...provision for non-SGML material, such as graphics, to be inserted in a document. For these reasons the Computer-Aided Acquisition and Logistics Support
NASA Technical Reports Server (NTRS)
Andrews, Alison E.
1987-01-01
An approach to analyzing CFD knowledge-based systems is proposed which is based, in part, on the concept of knowledge-level analysis. Consideration is given to the expert cooling fan design system, the PAN AIR knowledge system, grid adaptation, and expert zonal grid generation. These AI/CFD systems demonstrate that current AI technology can be successfully applied to well-formulated problems that are solved by means of classification or selection of preenumerated solutions.
2018-05-04
ARL-TR-8359 ● MAY 2018 ● US Army Research Laboratory. Enhancing Human–Agent Teaming with Individualized, Adaptive Technologies: A Discussion of Critical Scientific Questions, by Arwen H DeCostanza, Amar R Marathe, Addison Bohannon...
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
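The adjoint correction at the heart of this procedure can be illustrated on a linear toy problem. The Python sketch below is purely illustrative (not the paper's CFD code): for a linear model A u = f with output functional J(u) = gᵀu, solving the adjoint system Aᵀψ = g turns the residual of an approximate solution into a computable correction to the functional.

```python
import numpy as np

# Hypothetical linear stand-in for the flow equations and an output functional.
rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
g = rng.standard_normal(n)

u_exact = np.linalg.solve(A, f)
J_exact = g @ u_exact

u_approx = u_exact + 0.05 * rng.standard_normal(n)  # plays the role of a coarse-grid solution
residual = f - A @ u_approx

psi = np.linalg.solve(A.T, g)            # adjoint solve for the functional of interest
J_corrected = g @ u_approx + psi @ residual

# For a linear problem the adjoint-weighted residual recovers J exactly;
# in the nonlinear CFD setting it yields a computable error estimate instead.
assert abs(J_corrected - J_exact) < 1e-8
```

In the nonlinear case the correction is no longer exact, and the remaining uncertainty in the estimate is precisely what the adaptive meshing in the abstract targets.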
Voice-coil technology for the E-ELT M4 Adaptive Unit
NASA Astrophysics Data System (ADS)
Gallieni, D.; Tintori, M.; Mantegazza, M.; Anaclerio, E.; Crimella, L.; Acerboni, M.; Biasi, R.; Angerer, G.; Andrigettoni, M.; Merler, A.; Veronese, D.; Carel, J.-L.; Marque, G.; Molinari, E.; Tresoldi, D.; Toso, G.; Spanó, P.; Riva, M.; Mazzoleni, R.; Riccardi, A.; Mantegazza, P.; Manetti, M.; Morandini, M.; Vernet, E.; Hubin, N.; Jochum, L.; Madec, P.; Dimmler, M.; Koch, F.
We present our design of the E-ELT M4 Adaptive Unit based on voice-coil-driven deformable mirror technology. This technology was developed by the INAF-Arcetri, Microgate and ADS team over the past 15 years and has been adopted by a number of large ground-based telescopes such as the MMT, LBT, Magellan and, most recently, the VLT in the frame of the Adaptive Telescope Facility project. Our design is based on contactless force actuators made of permanent magnets glued on the back of the deformable mirror and coils mounted on a stiff reference structure. We use capacitive sensors to close a position loop co-located with each actuator. Dedicated high-performance parallel processors implement the local, decentralized control at the actuator level and a centralized feed-forward computation of all actuator forces. In our previous systems this approach achieved dynamic performance well in line with the requirements of the M4 Adaptive Unit (M4AU). The actuator density of our design is on the order of 30-mm spacing, for a figure of about 6000 actuators on the M4AU, which fulfils the fitting-error and correction requirements of the E-ELT high-order DM. Moreover, our contactless technology makes the deformable mirror tolerant of up to 5% actuator failures without compromising the system's ability to reach its specified performance, besides allowing large mechanical tolerances between the reference structure and the deformable mirror. Finally, we present the Demonstration Prototype we are building in the frame of the M4AU Phase B study to measure the optical and dynamical performance predicted by our design. The prototype will be fully representative of the M4AU features; in particular, it will address the controllability of two adjacent segments of the 2-mm-thick mirror and implement the actuator "brick" modular concept that has been adopted to dramatically improve the maintainability of the final unit.
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
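The Jacobian-free Newton-Krylov kernel mentioned above can be sketched in a few lines. The point is that the Krylov solver never needs the Jacobian matrix J(u) itself, only products J(u)·v, which a finite difference of the nonlinear residual supplies. The residual F below is an illustrative stand-in, not the reduced-MHD discretization.

```python
import numpy as np

# Hypothetical nonlinear residual: a linear operator plus a pointwise cubic term.
def F(u, A):
    return A @ u + u**3

# The JFNK kernel: approximate J(u) @ v with one extra residual evaluation.
def jacobian_free_matvec(F, u, v, A, eps=1e-7):
    return (F(u + eps * v, A) - F(u, A)) / eps

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
u = rng.standard_normal(n)
v = rng.standard_normal(n)

Jv_fd = jacobian_free_matvec(F, u, v, A)
Jv_exact = A @ v + 3 * u**2 * v   # analytic Jacobian-vector product for this F

assert np.allclose(Jv_fd, Jv_exact, atol=1e-4)
```

A preconditioner (the "physics-based" multigrid/FAC approach in the abstract) is then applied around these matrix-free products to keep the Krylov iteration count bounded as the grid is refined.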
NASA Astrophysics Data System (ADS)
Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing
2015-08-01
Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time performance and stability. However, as the number of wavefront-sensor sub-apertures and deformable mirror actuators in adaptive optics systems increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltage of each actuator is obtained through iteration, offering great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2) ~ O(n^3) for the direct gradient wavefront control algorithm, but about O(n) ~ O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators in the AO system. The larger the number of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
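The contrast between the two reconstructors can be sketched on a toy problem (illustrative Python; the matrix sizes and the choice of conjugate gradients as the iterative solver are assumptions, not the paper's exact algorithm). The direct method multiplies the measured slopes by a precomputed control matrix, while the iterative method solves the normal equations DᵀD v = Dᵀs using only matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 20                       # slope measurements, actuators
D = rng.standard_normal((m, n))     # actuator-to-slope influence matrix
s = rng.standard_normal(m)          # measured wavefront slopes

v_direct = np.linalg.pinv(D) @ s    # direct gradient method: precomputed control matrix

# Plain conjugate-gradient solver: only needs products with D and D^T,
# which is what makes the iterative approach cheap for sparse influence matrices.
def cg(matvec, b, iters=200, tol=1e-16):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

v_iter = cg(lambda v: D.T @ (D @ v), D.T @ s)
assert np.allclose(v_iter, v_direct, atol=1e-6)
```

For dense D both routes cost about the same per frame, but when each sub-aperture only senses a few nearby actuators, the matrix-free products scale far better, which is the advantage the abstract quantifies.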
Wireless Adaptive Therapeutic TeleGaming in a Pervasive Computing Environment
NASA Astrophysics Data System (ADS)
Peters, James F.; Szturm, Tony; Borkowski, Maciej; Lockery, Dan; Ramanna, Sheela; Shay, Barbara
This chapter introduces a wireless, pervasive computing approach to adaptive therapeutic telegaming considered in the context of near set theory. Near set theory provides a formal basis for observation, comparison and classification of perceptual granules. A perceptual granule is defined by a collection of objects that are graspable by the senses or by the mind. In the proposed pervasive computing approach to telegaming, a handicapped person (e.g., a stroke patient with limited hand, finger, or arm function) plays a video game by interacting with familiar instrumented objects such as cups, cutlery, soccer balls, nozzles, screw-top lids, and spoons, so that the technology that makes therapeutic exercise game-playing possible is largely invisible (Archives of Physical Medicine and Rehabilitation 89:2213-2217, 2008). The basic approach to adaptive learning (AL) in the proposed telegaming environment is ethology-inspired and is quite different from the traditional approach to reinforcement learning. In biologically-inspired learning, organisms learn to achieve some goal by durable modification of behaviours in response to signals from the environment resulting from specific experiences (Animal Behavior, 1995). The term adaptive is used here in an ethological sense, where learning by an organism results from modifying behaviour in response to perceived changes in the environment. To instill adaptivity in a video game, it is assumed that learning by a video game is episodic. During an episode, the behaviour of a player is measured indirectly by tracking the occurrence of gaming events such as a hit or a miss of a target (e.g., hitting a moving ball with a game paddle). An ethogram provides a record of behaviour feature values that serves as a basis for a functional registry of handicapped players, supporting gaming adaptivity. An important practical application of adaptive gaming is therapeutic rehabilitation exercise carried out in parallel with playing action video games.
Enjoyable and engaging interactive gaming will motivate patients to complete the rehabilitation process. Adaptivity is seen as a way to make action games more accessible to those who have physical and cognitive impairments. The telegaming system connects to the internet and implements a feed-and-forward mechanism that transmits gaming session tables after each gaming session to a remote registry accessible to therapists and researchers. The contribution of this chapter is the introduction of a framework for wireless telegaming useful in therapeutic rehabilitation.
Star adaptation for two algorithms used on serial computers
NASA Technical Reports Server (NTRS)
Howser, L. M.; Lambiotte, J. J., Jr.
1974-01-01
Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
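The second of the two algorithms can be sketched in modern terms (a Python/NumPy illustration, standing in for the report's FORTRAN listings): Gauss-Legendre quadrature approximates an integral on [-1, 1] as a weighted sum of integrand values at the Legendre nodes. That weighted sum is a single dot product, which is exactly the kind of operation a vector pipeline like the STAR-100's executes efficiently.

```python
import numpy as np

# 8-point Gauss-Legendre rule: nodes and weights on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(8)

# Approximate the integral of e^x over [-1, 1] as a weighted sum --
# one vectorizable dot product over the quadrature points.
approx = np.sum(weights * np.exp(nodes))
exact = np.e - 1.0 / np.e

# An n-point rule integrates polynomials up to degree 2n-1 exactly,
# so 8 points already reach machine precision for a smooth integrand.
assert abs(approx - exact) < 1e-12
```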
Neural networks: Application to medical imaging
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.
1994-01-01
The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.
Methods and principles for determining task dependent interface content
NASA Technical Reports Server (NTRS)
Shalin, Valerie L.; Geddes, Norman D.; Mikesell, Brian G.
1992-01-01
Computer generated information displays provide a promising technology for offsetting the increasing complexity of the National Airspace System. To realize this promise, however, we must extend and adapt the domain-dependent knowledge that informally guides the design of traditional dedicated displays. In our view, the successful exploitation of computer generated displays revolves around the idea of information management, that is, the identification, organization, and presentation of relevant and timely information in a complex task environment. The program of research that is described leads to methods and principles for information management in the domain of commercial aviation. The multi-year objective of the proposed program of research is to develop methods and principles for determining task dependent interface content.
The QuakeSim Project: Numerical Simulations for Active Tectonic Processes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry
2004-01-01
In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.
Learners' Perceptions and Illusions of Adaptivity in Computer-Based Learning Environments
ERIC Educational Resources Information Center
Vandewaetere, Mieke; Vandercruysse, Sylke; Clarebout, Geraldine
2012-01-01
Research on computer-based adaptive learning environments has shown exemplary growth. Although the mechanisms of effective adaptive instruction are unraveled systematically, little is known about the relative effect of learners' perceptions of adaptivity in adaptive learning environments. As previous research has demonstrated that the learners'…
Metcalfe, Helene; Jonas-Dwyer, Diana; Saunders, Rosemary; Dugmore, Helen
2015-10-01
The introduction of learning technologies into educational settings continues to grow alongside the emergence of innovative technologies into the healthcare arena. The challenge for health professionals such as medical, nursing, and allied health practitioners is to develop an improved understanding of these technologies and how they may influence practice and contribute to healthcare. For nurse educators to remain contemporary, there is a need to not only embrace current technologies in teaching and learning but to also ensure that students are able to adapt to this changing pedagogy. One recent technological innovation is the use of wearable computing technology, consisting of video recording with the capability of playback analysis. The authors of this article discuss the introduction of the use of wearable Point of View video glasses by a cohort of nursing students in a simulated clinical learning laboratory. Of particular interest was the ease of use of the glasses, also termed the usability of this technology, which is central to its success. Students' reflections were analyzed together with suggestions for future use.
A service based adaptive U-learning system using UX.
Jeong, Hwa-Young; Yi, Gangman
2014-01-01
In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems combines cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high-performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses, by learning units, using services in a ubiquitous computing environment. We also investigate functions that tailor materials to users' learning styles; that is, we analyzed users' data and characteristics in accordance with their user experience, and applied the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques.
Implementation of Multispectral Image Classification on a Remote Adaptive Computer
NASA Technical Reports Server (NTRS)
Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna
1999-01-01
As the demand for higher-performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders-of-magnitude performance increases over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
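The probabilistic neural network underlying the classifier can be sketched briefly (hypothetical toy data, not LANDSAT imagery): each class's score at a pixel is a sum of Gaussian kernels centered on that class's training samples, and the pixel is assigned to the class with the largest summed activation. The structure, a large number of independent kernel evaluations followed by a sum, is what maps so naturally onto FPGA gate-level parallelism.

```python
import numpy as np

def pnn_classify(pixel, train_samples, train_labels, sigma=0.5):
    """Assign `pixel` to the class with the largest summed Gaussian activation."""
    scores = {}
    for c in np.unique(train_labels):
        centers = train_samples[train_labels == c]
        d2 = np.sum((centers - pixel) ** 2, axis=1)      # squared distances to class samples
        scores[c] = np.sum(np.exp(-d2 / (2 * sigma**2))) # Parzen-window class likelihood
    return max(scores, key=scores.get)

# Toy example: two spectral bands, two land-cover classes.
train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
labels = np.array([0, 0, 1, 1])

assert pnn_classify(np.array([0.15, 0.15]), train, labels) == 0
assert pnn_classify(np.array([0.85, 0.85]), train, labels) == 1
```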
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
Cheng, Zhenjing; Li, Haibo; Huang, Qiulan; Cheng, Yaodong; Chen, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the job queues in HTCondor, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practical runs, virtual computing resources dynamically expanded or shrank as computing requirements changed. Additionally, the CPU utilization ratio of computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
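The dual-threshold policy described above can be sketched as a simple decision function (a hypothetical illustration; the function name, thresholds, and quota handling are illustrative, not IHEPCloud's actual implementation): expand the virtual-machine pool when the job queue grows past an upper threshold, subject to the experiment's quota, and shrink it when idle capacity exceeds a lower threshold.

```python
def resize_pool(queued_jobs, idle_vms, active_vms, quota, upper=10, lower=5):
    """Return the number of VMs to add (positive) or release (negative)."""
    if queued_jobs > upper and active_vms < quota:
        # Expand to absorb the backlog, but never beyond the quota.
        return min(queued_jobs - upper, quota - active_vms)
    if queued_jobs == 0 and idle_vms > lower:
        # Shrink: release idle VMs above the lower threshold.
        return -(idle_vms - lower)
    return 0

# Backlog of 30 jobs, quota caps the expansion at 10 new VMs.
assert resize_pool(queued_jobs=30, idle_vms=0, active_vms=40, quota=50) == 10
# Empty queue with 12 idle VMs: release the 7 above the lower threshold.
assert resize_pool(queued_jobs=0, idle_vms=12, active_vms=20, quota=50) == -7
# Between the thresholds: leave the pool alone.
assert resize_pool(queued_jobs=8, idle_vms=3, active_vms=20, quota=50) == 0
```

Keeping a dead band between the two thresholds prevents the pool from thrashing as the queue length fluctuates, which is the usual rationale for dual-threshold designs.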
Adaptive Technologies. Research Report. ETS RR-07-05
ERIC Educational Resources Information Center
Shute, Valerie J.; Zapata-Rivera, Diego
2007-01-01
This paper describes research and development efforts related to adaptive technologies, which can be combined with other technologies and processes to form an adaptive system. The goal of an adaptive system, in the context of this paper, is to create an instructionally sound and flexible environment that supports learning for students with a range…
Bryant, D P; Bryant, B R
1998-01-01
Cooperative learning (CL) is a common instructional arrangement that is used by classroom teachers to foster academic achievement and social acceptance of students with and without learning disabilities. Cooperative learning is appealing to classroom teachers because it can provide an opportunity for more instruction and feedback by peers than can be provided by teachers to individual students who require extra assistance. Recent studies suggest that students with LD may need adaptations during cooperative learning activities. The use of assistive technology adaptations may be necessary to help some students with LD compensate for their specific learning difficulties so that they can engage more readily in cooperative learning activities. A process for integrating technology adaptations into cooperative learning activities is discussed in terms of three components: selecting adaptations, monitoring the use of the adaptations during cooperative learning activities, and evaluating the adaptations' effectiveness. The article concludes with comments regarding barriers to and support systems for technology integration, technology and effective instructional practices, and the need to consider technology adaptations for students who have learning disabilities.
NASA Technical Reports Server (NTRS)
Campbell, Stefan F.; Kaneshige, John T.; Nguyen, Nhan T.; Krishakumar, Kalmanje S.
2010-01-01
Presented here is the evaluation of multiple adaptive control technologies for a generic transport aircraft simulation. For this study, seven model reference adaptive control (MRAC) based technologies were considered. Each technology was integrated into an identical dynamic-inversion control architecture and tuned using a methodology based on metrics and specific design requirements. Simulation tests were then performed to evaluate each technology's sensitivity to time delay, flight condition, model uncertainty, and artificially induced cross-coupling. The resulting robustness and performance characteristics were used to identify potential strengths, weaknesses, and integration challenges of the individual adaptive control technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Li, Weixuan; Zeng, Lingzao
2016-06-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.
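The EnKF analysis step that the proposed filter is benchmarked against can be sketched in a few lines (an illustrative Python toy, not the hydrological model in the abstract; the scalar observation operator H is an assumption): the Kalman gain is built from ensemble sample covariances, and each member is updated with a perturbed observation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_ens = 4, 200
H = np.array([[1.0, 0.0, 0.0, 0.0]])   # observe only the first state variable
R = np.array([[0.1]])                  # observation-error covariance
y_obs = np.array([2.0])

# Forecast ensemble: the Monte Carlo sample whose size drives EnKF accuracy.
ens = rng.standard_normal((n_state, n_ens)) + 1.0

anom = ens - ens.mean(axis=1, keepdims=True)
P = anom @ anom.T / (n_ens - 1)                 # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain from the sample statistics

# Perturbed-observation update of every ensemble member.
y_pert = y_obs[:, None] + rng.standard_normal((1, n_ens)) * np.sqrt(R[0, 0])
ens_a = ens + K @ (y_pert - H @ ens)

# The analysis mean of the observed component moves toward the observation.
prior_err = abs(ens.mean(axis=1)[0] - y_obs[0])
post_err = abs(ens_a.mean(axis=1)[0] - y_obs[0])
assert post_err < prior_err
```

The sampling error of P and K shrinks only as 1/√n_ens, which is the cost the PCKF and RAPCKF approaches aim to avoid by approximating the system with polynomial chaos instead.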
Holographic Adaptive Laser Optics System (HALOS): Fast, Autonomous Aberration Correction
NASA Astrophysics Data System (ADS)
Andersen, G.; MacDonald, K.; Gelsinger-Austin, P.
2013-09-01
We present an adaptive optics system which uses a multiplexed hologram to deconvolve the phase aberrations in an input beam. This wavefront characterization is extremely fast, as it is based on simple measurements of the intensity of focal spots and does not require any computation. Furthermore, the system does not require a computer in the loop and is thus cheaper, less complex, and more robust. A fully functional, closed-loop prototype incorporating a 32-element MEMS mirror has been constructed. The unit has a footprint no larger than a laptop but runs at a bandwidth of 100 kHz, over an order of magnitude faster than comparable conventional systems occupying a significantly larger volume. Additionally, since the sensing is based on parallel, all-optical processing, the speed is independent of actuator number, running at the same bandwidth for one actuator as for a million. We are developing the HALOS technology with a view towards next-generation surveillance systems for extreme adaptive optics applications. These include imaging, lidar, and free-space optical communications for unmanned aerial vehicles and space situational awareness (SSA). The small volume is ideal for UAVs, while the high speed and high resolution will be of great benefit to the ground-based observation of space-based objects.
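The computation-free sensing described above reduces, per aberration mode, to comparing the intensities of the two focal spots that the multiplexed hologram forms for that mode. A rough numerical sketch of that readout follows; the normalized-difference form and all names are our assumptions for illustration, not the HALOS design documents.

```python
def modal_coefficient(i_plus, i_minus, bias):
    """Estimate one aberration-mode coefficient from the intensities of
    the two focal spots encoded for that mode.  The normalized intensity
    difference is taken to be roughly linear in the coefficient over
    +/- `bias`; this linear model is an illustrative assumption."""
    return bias * (i_plus - i_minus) / (i_plus + i_minus)

# Equal spot intensities -> no aberration in this mode; an imbalance
# drives the corresponding deformable-mirror actuators proportionally.
flat = modal_coefficient(1.0, 1.0, bias=0.5)
tilt = modal_coefficient(2.0, 1.0, bias=0.5)
```

Because each mode is read out independently from a pair of intensities, the estimate parallelizes trivially across modes, which is why the sensing bandwidth does not depend on actuator count.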
The Efficacy of Psychophysiological Measures for Implementing Adaptive Technology
NASA Technical Reports Server (NTRS)
Scerbo, Mark W.; Freeman, Frederick G.; Mikulka, Peter J.; Parasuraman, Raja; DiNocero, Francesco; Prinzel, Lawrence J., III
2001-01-01
Adaptive automation refers to technology that can change its mode of operation dynamically; both the technology and the operator can initiate changes in the level or mode of automation. The present paper reviews research on adaptive technology in three primary sections. The first section presents issues surrounding the development and implementation of adaptive automation. Because physiologically based measures show much promise for implementing adaptive automation, the second section is devoted to examining candidate indices. The final section discusses the techniques that show the greatest promise for adaptive automation, as well as issues that still need to be resolved.
Modular, Cost-Effective, Extensible Avionics Architecture for Secure, Mobile Communications
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2006-01-01
Current onboard communication architectures are based upon an all-in-one communications management unit. This unit and its associated radio systems have typically been designed as a one-off, proprietary system. As such, the architecture lacks flexibility and cannot easily adapt to new technology, new communication protocols, and new communication links. This paper describes the current avionics communication architecture and provides a historical perspective on the evolution of this system. A new onboard architecture is proposed that allows full use of commercial-off-the-shelf technologies, integrated in a modular approach, thereby enabling a flexible, cost-effective, and fully deployable design that can take advantage of ongoing advances in the computer, cryptography, and telecommunications industries.
An adaptive process-based cloud infrastructure for space situational awareness applications
NASA Astrophysics Data System (ADS)
Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce
2014-06-01
Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate to meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical virtual machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper, and the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible allocation of cloud computing resources are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.
ERIC Educational Resources Information Center
May, Donald M.; And Others
The minicomputer-based Computerized Diagnostic and Decision Training (CDDT) system described combines the principles of artificial intelligence, decision theory, and adaptive computer assisted instruction for training in electronic troubleshooting. The system incorporates an adaptive computer program which learns the student's diagnostic and…
ERIC Educational Resources Information Center
Martin, Andrew J.; Lazendic, Goran
2018-01-01
The present study investigated the implications of computer-adaptive testing (operationalized by way of multistage adaptive testing; MAT) and "conventional" fixed order computer testing for various test-relevant outcomes in numeracy, including achievement, test-relevant motivation and engagement, and subjective test experience. It did so…
Authoring of Adaptive Computer Assisted Assessment of Free-Text Answers
ERIC Educational Resources Information Center
Alfonseca, Enrique; Carro, Rosa M.; Freire, Manuel; Ortigosa, Alvaro; Perez, Diana; Rodriguez, Pilar
2005-01-01
Adaptation techniques can be applied not only to the multimedia contents or navigational possibilities of a course, but also to the assessment. In order to facilitate the authoring of adaptive free-text assessment and its integration within adaptive web-based courses, Adaptive Hypermedia techniques and Free-text Computer Assisted Assessment are…
Computational high-resolution optical imaging of the living human retina
NASA Astrophysics Data System (ADS)
Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.
2015-07-01
High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.
A global distributed storage architecture
NASA Technical Reports Server (NTRS)
Lionikis, Nemo M.; Shields, Michael F.
1996-01-01
NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked computer and storage environment and hides its diverse underlying implementation details. In this paper, we describe one of the fundamental services being built in our envisioned infrastructure: a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.
NASA Astrophysics Data System (ADS)
Klopfer, Eric; Yoon, Susan; Perry, Judy
2005-09-01
This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yields results similar to previous research evaluations of handheld activities with respect to enhancing motivation, engagement, and self-directed learning. Three additional themes are discussed that provide insight into the curricular applicability of Participatory Simulations and suggest a new take on ubiquitous and accessible mobile computing. These themes generally point to the multiple layers of social and cognitive flexibility intrinsic to their design: ease of adaptation to subject-matter content knowledge and curricular integration; facility in attending to teacher-individualized goals; and encouragement of the adoption of learner-centered strategies.
Eby, David W; Molnar, Lisa J; Zakrajsek, Jennifer S; Ryan, Lindsay H; Zanier, Nicole; Louis, Renée M St; Stanciu, Sergiu C; LeBlanc, David; Kostyniuk, Lidia P; Smith, Jacqui; Yung, Raymond; Nyquist, Linda; DiGuiseppi, Carolyn; Li, Guohua; Mielenz, Thelma J; Strogatz, David
2018-04-01
The purpose of the present study was to gain a better understanding of the types of in-vehicle technologies being used by older drivers as well as older drivers' use, learning, and perceptions of safety related to these technologies among a large cohort of older drivers at multiple sites in the United States. A secondary purpose was to explore the prevalence of aftermarket vehicle adaptations and how older adults go about making adaptations and how they learn to use them. The study utilized baseline questionnaire data from 2990 participants from the Longitudinal Research on Aging Drivers (LongROAD) study. Fifteen in-vehicle technologies and 12 aftermarket vehicle adaptations were investigated. Overall, 57.2% of participants had at least one advanced technology in their primary vehicle. The number of technologies in a vehicle was significantly related to being male, having a higher income, and having a higher education level. The majority of respondents learned to use these technologies on their own, with "figured-it-out-myself" being reported by 25%-75% of respondents across the technologies. Overall, technologies were always used about 43% of the time, with wide variability among the technologies. Across all technologies, nearly 70% of respondents who had these technologies believed that they made them a safer driver. With regard to vehicle adaptations, less than 9% of respondents had at least one vehicle adaptation present, with the number of adaptations per vehicle ranging from 0 to 4. A large majority did not work with a professional to make or learn about the aftermarket vehicle adaptation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Proteomics of Skeletal Muscle: Focus on Insulin Resistance and Exercise Biology
Deshmukh, Atul S.
2016-01-01
Skeletal muscle is the largest tissue in the human body and plays an important role in locomotion and whole-body metabolism. It accounts for ~80% of insulin-stimulated glucose disposal. Skeletal muscle insulin resistance, a primary feature of Type 2 diabetes, is caused by a decreased ability of muscle to respond to circulating insulin. Physical exercise improves insulin sensitivity and whole-body metabolism, and remains one of the most promising interventions for the prevention of Type 2 diabetes. Insulin resistance and exercise adaptations in skeletal muscle might be a cause, or consequence, of altered protein expression profiles and/or their posttranslational modifications (PTMs). Mass spectrometry (MS)-based proteomics offers enormous promise for investigating the molecular mechanisms underlying skeletal muscle insulin resistance and exercise-induced adaptation; however, skeletal muscle proteomics is challenging. This review describes the technical limitations of skeletal muscle proteomics as well as emerging developments in the proteomics workflow with respect to sample preparation, liquid chromatography (LC), MS, and computational analysis. These technologies have not yet been fully exploited in the field of skeletal muscle proteomics. Future studies that involve state-of-the-art proteomics technology will broaden our understanding of exercise-induced adaptations as well as the molecular pathogenesis of insulin resistance. This could lead to the identification of new therapeutic targets. PMID:28248217
Intelligent Vehicle Health Management
NASA Technical Reports Server (NTRS)
Paris, Deidre E.; Trevino, Luis; Watson, Michael D.
2005-01-01
As a part of the overall goal of developing Integrated Vehicle Health Management (IVHM) systems for aerospace vehicles, the NASA Faculty Fellowship Program (NFFP) at Marshall Space Flight Center has performed a pilot study on IVHM principles which integrates researched IVHM technologies in support of Integrated Intelligent Vehicle Management (IIVM). IVHM is the process of assessing, preserving, and restoring system functionality across flight and ground systems (NASA NGLT 2004). The framework presented in this paper integrates advanced computational techniques with sensor and communication technologies for spacecraft, enabling responses that detect, diagnose, reason about, and adapt to system faults in support of IIVM. These real-time responses allow the IIVM to modify the affected vehicle subsystem(s) prior to a catastrophic event. Furthermore, the objective of this pilot program is to develop and integrate technologies which can provide a continuous, intelligent, and adaptive health state of a vehicle and use this information to improve safety and reduce costs of operations. Recent investments in avionics, health management, and controls have been directed towards IIVM. As this concept has matured, it has become clear that IIVM requires the same sensors and processing capabilities as the real-time avionics functions to support diagnosis of subsystem problems. New sensors have been proposed, in addition, to augment the avionics sensors to support better system monitoring and diagnostics. As the designs have been considered, a synergy has been realized where the real-time avionics can utilize sensors proposed for diagnostics and prognostics to make better real-time decisions in response to detected failures. IIVM provides for a single system allowing modularity of functions and hardware across the vehicle.
The framework that supports IIVM consists of 11 major on-board functions necessary to fully manage a space vehicle while maintaining crew safety and mission objectives: Guidance and Navigation; Communications and Tracking; Vehicle Monitoring; Information Transport and Integration; Vehicle Diagnostics; Vehicle Prognostics; Vehicle Mission Planning; Automated Repair and Replacement; Vehicle Control; Human Computer Interface; and Onboard Verification and Validation. Furthermore, the presented framework provides complete vehicle management which not only allows for increased crew safety and mission success through new intelligence capabilities, but also yields a mechanism for more efficient vehicle operations. The representative IVHM technologies include a computer platform using heterogeneous communication, coupled electromagnetic oscillators for enhanced communications, Linux-based real-time systems, genetic algorithms, Bayesian networks, evolutionary algorithms, dynamic systems control modeling, and advanced sensing capabilities. This paper presents the IVHM technologies developed under NASA's NFFP pilot project; the integration of these technologies forms the framework for IIVM.
NASA Astrophysics Data System (ADS)
Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2018-01-01
Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high sidelobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), has been applied to DBS imaging, forming narrower Doppler filter groups than the FFT with lower sidelobe levels. However, the IAA is iterative and requires matrix multiplication and inversion to form the covariance matrix and its inverse, and to traverse the WLS estimate for each sampling point, resulting in notably high (cubic-time) computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication, based on the Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve a fast solution and the Toeplitz structure of R to realize fast inversion. The proposed method enjoys lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and operational efficiency.
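The fast Toeplitz inversion mentioned above rests on the Levinson-Durbin recursion. A minimal, self-contained sketch follows; it is not the authors' implementation, and it shows the textbook Yule-Walker special case (real symmetric Toeplitz system from autocorrelations) rather than the full FIAA pipeline.

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion: solve the Yule-Walker system
    T x = r[1:], where T is the symmetric Toeplitz matrix built from
    the autocorrelations r[0..p-1], in O(p^2) time instead of the
    O(p^3) of a generic dense solver.  Returns (x, err), with err the
    final prediction-error power."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, p + 1):
        acc = r[m] + a[1:m] @ r[m-1:0:-1]     # r_m + sum_i a_i r_{m-i}
        k = -acc / err                         # reflection coefficient
        a[1:m+1] = a[1:m+1] + k * a[m-1::-1]   # order-update of the filter
        err *= (1.0 - k * k)
    return -a[1:], err

r = np.array([4.0, 2.0, 1.5, 0.5])             # toy autocorrelation sequence
x, err = levinson_durbin(r)
# x agrees with a direct dense solve of the same Toeplitz system
```

The full IAA covariance is complex Hermitian Toeplitz, so the paper's version works with the Gohberg-Semencul representation, but the order-recursive structure exploited is the same.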
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
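The Point-Kernel step described in this abstract can be sketched in a few lines: each point kernel contributes attenuated, geometrically diluted flux, scaled by a buildup factor for scattering. The simple linear buildup below is a stand-in for the Geometric-Progression fit used in the paper, and all names and values are illustrative assumptions.

```python
import numpy as np

def point_kernel_dose(points, strengths, detector, mu,
                      buildup=lambda mx: 1.0 + mx):
    """Dose-rate proxy at `detector` from discretized point kernels.
    Each kernel contributes S * B(mu*d) * exp(-mu*d) / (4*pi*d**2),
    where d is the kernel-to-detector distance, mu the linear
    attenuation coefficient, and B the buildup factor accounting for
    scattered radiation (here a crude linear model, not the
    Geometric-Progression formula)."""
    d = np.linalg.norm(points - detector, axis=1)
    mx = mu * d                                  # mean free paths traversed
    contrib = strengths * buildup(mx) * np.exp(-mx) / (4.0 * np.pi * d ** 2)
    return contrib.sum()

# A single unit-strength kernel at the origin: dose rate falls off
# with detector distance, as expected.
src = np.array([[0.0, 0.0, 0.0]])
s = np.array([1.0])
near = point_kernel_dose(src, s, np.array([0.0, 0.0, 1.0]), mu=0.1)
far = point_kernel_dose(src, s, np.array([0.0, 0.0, 2.0]), mu=0.1)
```

The adaptive part of VRBAM then amounts to choosing how finely `points` samples the weighted point-cloud as a function of the detector position, trading kernel count against accuracy.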
Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 1
NASA Technical Reports Server (NTRS)
Lea, Robert N. (Editor); Villarreal, James (Editor)
1991-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Houston-Clear Lake. The workshop was held April 11 to 13 at the Johnson Space Center. Technical topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S. R.
2014-12-01
Disaster risk management has grown to rely on earth observations, multi-source data analysis, numerical modeling, and interagency information sharing. The practice and outcomes of disaster risk management will likely undergo further change as several emerging earth science technologies come of age: mobile devices; location-based services; ubiquitous sensors; drones; small satellites; satellite direct readout; Big Data analytics; cloud computing; Web services for predictive modeling, semantic reconciliation, and collaboration; and many others. Integrating these new technologies well requires developing and adapting them to meet current needs; but also rethinking current practice to draw on new capabilities to reach additional objectives. This requires a holistic view of the disaster risk management enterprise and of the analytical or operational capabilities afforded by these technologies. One helpful tool for this assessment, the GEOSS Architecture for the Use of Remote Sensing Products in Disaster Management and Risk Assessment (Evans & Moe, 2013), considers all phases of the disaster risk management lifecycle for a comprehensive set of natural hazard types, and outlines common clusters of activities and their use of information and computation resources. We are using these architectural views, together with insights from current practice, to highlight effective, interrelated roles for emerging earth science technologies in disaster risk management. These roles may be helpful in creating roadmaps for research and development investment at national and international levels.
Incorporating computational resources in a cancer research program
Woods, Nicholas T.; Jhuraney, Ankita; Monteiro, Alvaro N.A.
2015-01-01
Recent technological advances have transformed cancer genetics research. These advances have served as the basis for the generation of a number of richly annotated datasets relevant to the cancer geneticist. In addition, many of these technologies are now within reach of smaller laboratories to answer specific biological questions. Thus, one of the most pressing issues facing an experimental cancer biology research program in genetics is incorporating data from multiple sources to annotate, visualize, and analyze the system under study. Fortunately, there are several computational resources to aid in this process. However, a significant effort is required to adapt a molecular biology-based research program to take advantage of these datasets. Here, we discuss the lessons learned in our laboratory and share several recommendations to make this transition effectively. This article is not meant to be a comprehensive evaluation of all the available resources, but rather highlight those that we have incorporated into our laboratory and how to choose the most appropriate ones for your research program. PMID:25324189
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes.
The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken. 
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
Recent developments of artificial intelligence in drying of fresh food: A review.
Sun, Qing; Zhang, Min; Mujumdar, Arun S
2018-03-01
Intellectualization is an important direction of drying development, and artificial intelligence (AI) technologies have been widely used to solve problems of nonlinear function approximation, pattern detection, data interpretation, optimization, simulation, diagnosis, control, data sorting, clustering, and noise reduction in different food drying technologies, owing to their self-learning ability, adaptive ability, strong fault tolerance, and high degree of robustness in mapping the nonlinear structures of arbitrarily complex and dynamic phenomena. This article presents a comprehensive review of intelligent drying technologies and their applications. The paper starts with an introduction to the basic theory of artificial neural networks (ANN), fuzzy logic, and expert systems. Then we summarize AI applications for modeling, predicting, and optimizing heat and mass transfer, thermodynamic performance parameters, and quality indicators, as well as physicochemical properties of dried products, in artificial biomimetic technology (electronic nose, computer vision) and different conventional drying technologies. Furthermore, opportunities and limitations of AI techniques in drying are outlined to provide more ideas for researchers in this area.
Defense Acquisition and the Case of the Joint Capabilities Technology Demonstration Office: Ad Hoc Problem Solving as a Mechanism for Adaptive Change
Aten, Kathryn; Dillard, John T.
2013-04-01
This report describes the preliminary analysis and findings of a study exploring what drives successful organizational adaptation in the context of technology…
HapHop-Physio: a computer game to support cognitive therapies in children
Rico-Olarte, Carolina; López, Diego M; Narváez, Santiago; Farinango, Charic D; Pharow, Peter S
2017-01-01
Background Care and support of children with physical or mental disabilities are accompanied by serious concerns for parents, families, healthcare institutions, schools, and their communities. Recent studies and technological innovations have demonstrated the feasibility of providing therapy and rehabilitation services to children supported by computer games. Objective The aim of this paper is to present HapHop-Physio, an innovative computer game that combines exercise with fun and learning, developed to support cognitive therapies in children. Methods Conventional software engineering methods, such as the Scrum methodology, a functionality test, and a related usability test, were part of the comprehensive methodology adapted to develop HapHop-Physio. Results The game supports visual and auditory attention therapies, as well as visual and auditory memory activities. It was developed by a multidisciplinary team, built on the Hopscotch® platform provided by the Fraunhofer Institute for Digital Media Technology (IDMT) in Germany, and designed in collaboration with a rehabilitation clinic in Colombia. HapHop-Physio was tested and evaluated to assess its functionality and user satisfaction. Conclusion The results show the development of an easy-to-use and fun game by a multidisciplinary team using state-of-the-art videogame technologies and software methodologies. Children testing the game concluded that they would like to play again while undergoing rehabilitation therapies. PMID:28740440
EMERGENCY RESPONSE TEAMS TRAINING IN PUBLIC HEALTH CRISIS - THE SERIOUSNESS OF SERIOUS GAMES.
Stanojevic, Vojislav; Stanojevic, Cedomirka
2016-07-01
The rapid development of multimedia technologies in the last twenty years has led to the emergence of new ways of learning academic and professional skills, implying the application of multimedia technology in the form of software: "serious computer games". Three-Dimensional Virtual Worlds. The basis of this game platform is the platform of three-dimensional virtual worlds, which can be described as communication systems in which participants share the same three-dimensional virtual space, within which they can move, manipulate objects, and communicate through their graphical representatives, avatars. Medical Education and Training. Arguments in favor of these computer tools in the learning process are accessibility, repeatability, low cost, the use of attractive graphics, and a high degree of adaptation to the user. Specifically designed avatars allow students to adapt to their roles in certain situations, especially those considered rare, dangerous, or unethical in real life. Drilling of major incidents, which requires creating environments for training, cannot be done in the real world due to high costs and the necessity of utilizing extensive resources. In addition, it is impossible to engage all the necessary health personnel at the same time. New technologies intended for conducting training, also called "virtual worlds", make the following possible: training at any time, depending on the user's commitments; simultaneous simulations on multiple levels, in several areas, in different circumstances, including dozens of unique victims; repeated scenarios and learning from mistakes; rapid feedback; and the development of non-technical skills that are critical for reducing errors in dynamic, high-risk environments. Virtual worlds, which should be the subject of further research and improvement in the field of hospital emergency response training for mass casualty incidents, certainly have a promising future.
A case for Sandia investment in complex adaptive systems science and technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colbaugh, Richard; Tsao, Jeffrey Yeenien; Johnson, Curtis Martin
2012-05-01
This white paper makes a case for Sandia National Laboratories investments in complex adaptive systems science and technology (S&T) -- investments that could enable higher-value-added and more-robustly-engineered solutions to challenges of importance to Sandia's national security mission and to the nation. Complex adaptive systems are ubiquitous in Sandia's national security mission areas. We often ignore the adaptive complexity of these systems by narrowing our 'aperture of concern' to systems or subsystems with a limited range of function exposed to a limited range of environments over limited periods of time. But by widening our aperture of concern we could increase our impact considerably. To do so, the science and technology of complex adaptive systems must mature considerably. Despite an explosion of interest outside of Sandia, however, that science and technology is still in its youth. What has been missing is contact with real (rather than model) systems and real domain-area detail. With its center-of-gravity as an engineering laboratory, Sandia has made considerable progress applying existing science and technology to real complex adaptive systems. It has focused much less, however, on advancing the science and technology itself. But its close contact with real systems and real domain-area detail represents a powerful strength with which to help complex adaptive systems science and technology mature. Sandia is thus both a prime beneficiary of, as well as potentially a prime contributor to, complex adaptive systems science and technology.
Building a productive program in complex adaptive systems science and technology at Sandia will not be trivial, but a credible path can be envisioned: in the short run, continue to apply existing science and technology to real domain-area complex adaptive systems; in the medium run, jump-start the creation of new science and technology capability through Sandia's Laboratory Directed Research and Development program; and in the long run, inculcate an awareness at the Department of Energy of the importance of supporting complex adaptive systems science through its Office of Science.
Jacobs, Robin J; Caballero, Joshua; Ownby, Raymond L; Kane, Michael N
2014-11-30
Low health literacy is associated with poor medication adherence in persons with human immunodeficiency virus (HIV), which can lead to poor health outcomes. As linguistic minorities, Spanish-dominant Hispanics (SDH) face challenges such as difficulties in obtaining and understanding accurate information about HIV and its treatment. Traditional health education methods (e.g., pamphlets, talking) may not be as effective as delivery through alternate venues. Technology-based health information interventions have the potential to be readily available on desktop computers or over the Internet. The purpose of this research was to adapt a theoretically based computer application (initially developed for English-speaking HIV-positive persons) to provide linguistically and culturally appropriate tailored health education to Spanish-dominant Hispanics with HIV (HIV + SDH). A mixed-methods approach using quantitative and qualitative interviews with 25 HIV + SDH persons and 5 key informants, guided by the Information-Motivation-Behavioral Skills (IMB) model, was used to investigate cultural factors influencing medication adherence in HIV + SDH. We used a triangulation approach to identify major themes within cultural contexts relevant to understanding factors related to motivation to adhere to treatment. From these data we adapted an automated computer-based health literacy intervention to be delivered in Spanish. Culture-specific motivational factors for treatment adherence in HIV + SDH persons that emerged from the data were stigma, familismo (family), mood, and social support. Using these data, we developed a culturally and linguistically adapted, tailored intervention that provides information about HIV infection, treatment, and medication-related problem-solving skills (proven effective in English-speaking populations) and that can be delivered using touch-screen computers, tablets, and smartphones, to be tested in a future study.
Using a theoretically-grounded Internet-based eHealth education intervention that builds on knowledge and also targets core cultural determinants of adherence may prove a highly effective approach to improve health literacy and medication decision-making in this group.
2014-01-01
Background Evidence indicates that post-stroke rehabilitation improves function, independence and quality of life. A key aspect of rehabilitation is the provision of appropriate information and feedback to the learner. Advances in information and communications technology (ICT) have allowed for the development of various systems to complement stroke rehabilitation that could be used in the home setting. These systems may increase the provision of rehabilitation a stroke survivor receives and carries out, as well as providing a learning platform that facilitates long-term self-managed rehabilitation and behaviour change. This paper describes the application of an innovative evaluative methodology to explore the utilisation of feedback for post-stroke upper-limb rehabilitation in the home. Methods Using the principles of realistic evaluation, this study aimed to test and refine intervention theories by exploring the complex interactions of contexts, mechanisms and outcomes that arise from technology deployment in the home. Methods included focus groups followed by multi-method case studies (n = 5) before, during and after the use of computer-based equipment. Data were analysed in relation to the context-mechanism-outcome hypotheses case by case. This was followed by a synthesis of the findings to answer the question, ‘what works for whom and in what circumstances and respects?’ Results Data analysis reveals that to achieve desired outcomes through the use of ICT, key elements of computer feedback, such as accuracy, measurability, rewarding feedback, adaptability, and knowledge of results feedback, are required to trigger the theory-driven mechanisms underpinning the intervention. In addition, the pre-existing context and the personal and environmental contexts, such as previous experience of service delivery, personal goals, trust in the technology, and social circumstances may also enable or constrain the underpinning theory-driven mechanisms. 
Conclusions Findings suggest that the theory-driven mechanisms underpinning the utilisation of feedback from computer-based technology for home-based upper-limb post-stroke rehabilitation are dependent on key elements of computer feedback and the personal and environmental context. The identification of these elements may therefore inform the development of technology; therapy education and the subsequent adoption of technology and a self-management paradigm; long-term self-managed rehabilitation; and importantly, improvements in the physical and psychosocial aspects of recovery. PMID:24903401
Parker, Jack; Mawson, Susan; Mountain, Gail; Nasr, Nasrin; Zheng, Huiru
2014-06-05
Evidence indicates that post-stroke rehabilitation improves function, independence and quality of life. A key aspect of rehabilitation is the provision of appropriate information and feedback to the learner. Advances in information and communications technology (ICT) have allowed for the development of various systems to complement stroke rehabilitation that could be used in the home setting. These systems may increase the provision of rehabilitation a stroke survivor receives and carries out, as well as providing a learning platform that facilitates long-term self-managed rehabilitation and behaviour change. This paper describes the application of an innovative evaluative methodology to explore the utilisation of feedback for post-stroke upper-limb rehabilitation in the home. Using the principles of realistic evaluation, this study aimed to test and refine intervention theories by exploring the complex interactions of contexts, mechanisms and outcomes that arise from technology deployment in the home. Methods included focus groups followed by multi-method case studies (n = 5) before, during and after the use of computer-based equipment. Data were analysed in relation to the context-mechanism-outcome hypotheses case by case. This was followed by a synthesis of the findings to answer the question, 'what works for whom and in what circumstances and respects?' Data analysis reveals that to achieve desired outcomes through the use of ICT, key elements of computer feedback, such as accuracy, measurability, rewarding feedback, adaptability, and knowledge of results feedback, are required to trigger the theory-driven mechanisms underpinning the intervention. In addition, the pre-existing context and the personal and environmental contexts, such as previous experience of service delivery, personal goals, trust in the technology, and social circumstances may also enable or constrain the underpinning theory-driven mechanisms. 
Findings suggest that the theory-driven mechanisms underpinning the utilisation of feedback from computer-based technology for home-based upper-limb post-stroke rehabilitation are dependent on key elements of computer feedback and the personal and environmental context. The identification of these elements may therefore inform the development of technology; therapy education and the subsequent adoption of technology and a self-management paradigm; long-term self-managed rehabilitation; and importantly, improvements in the physical and psychosocial aspects of recovery.
Test Anxiety, Computer-Adaptive Testing and the Common Core
ERIC Educational Resources Information Center
Colwell, Nicole Makas
2013-01-01
This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…
Health Information Exchange as a Complex and Adaptive Construct: Scoping Review.
Akhlaq, Ather; Sheikh, Aziz; Pagliari, Claudia
2017-01-25
To understand how the concept of Health Information Exchange (HIE) has evolved over time. Supplementary analysis of data from a systematic scoping review of definitions of HIE from 1900 to 2014, involving temporal analysis of underpinning themes. The search identified 268 unique definitions of HIE dating from 1957 onwards; 103 in scientific databases and 165 in Google. These contained consistent themes, representing the core concept of exchanging health information electronically, as well as fluid themes, reflecting the evolving policy, business, organisational and technological context of HIE (including the emergence of HIE as an organisational 'entity'). These are summarised graphically to show how the concept has evolved around the world with the passage of time. The term HIE emerged in 1957 with the establishment of Occupational HIE, evolving through the 1990s with concepts such as electronic data interchange and mobile computing technology; then from 2006-10 largely aligning with the US Government's health information technology strategy and the creation of HIEs as organisational entities, alongside the broader interoperability imperative, and continuing to evolve today as part of a broader international agenda for sustainable, information-driven health systems. The concept of HIE is an evolving and adaptive one, reflecting the ongoing quest for integrated and interoperable information to improve the efficiency and effectiveness of health systems, in a changing technological and policy environment.
ERIC Educational Resources Information Center
Hopf-Weichel, Rosemarie; And Others
This report describes results of the first year of a three-year program to develop and evaluate a new Adaptive Computerized Training System (ACTS) for electronics maintenance training. (ACTS incorporates an adaptive computer program that learns the student's diagnostic and decision value structure, compares it to that of an expert, and adapts the…
Pizzolato, Claudio; Lloyd, David G.; Barrett, Rod S.; Cook, Jill L.; Zheng, Ming H.; Besier, Thor F.; Saxby, David J.
2017-01-01
Musculoskeletal tissues respond to optimal mechanical signals (e.g., strains) through anabolic adaptations, while mechanical signals above and below optimal levels cause tissue catabolism. If an individual's physical behavior could be altered to generate optimal mechanical signaling to musculoskeletal tissues, then targeted strengthening and/or repair would be possible. We propose new bioinspired technologies to provide real-time biofeedback of relevant mechanical signals to guide training and rehabilitation. In this review we provide a description of how wearable devices may be used in conjunction with computational rigid-body and continuum models of musculoskeletal tissues to produce real-time estimates of localized tissue stresses and strains. It is proposed that these bioinspired technologies will facilitate a new approach to physical training that promotes tissue strengthening and/or repair through optimal tissue loading. PMID:29093676
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
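The abstract does not spell out how the adaptive execution module decides between the two back-ends; as a hedged illustration, the per-frame choice between full visual-inertial odometry and the cheap optical-flow tracker might look like the following sketch, where the trigger signals (gyroscope magnitude, tracked-feature count) and thresholds are assumptions for illustration, not the paper's actual policy.

```python
# Hedged sketch of an adaptive execution module for visual-inertial SLAM:
# per frame, pick the accurate-but-costly VIO back-end or the cheap
# optical-flow tracker. Signals and thresholds are illustrative assumptions.

def select_tracker(gyro_norm, n_tracked_features,
                   gyro_thresh=0.5, feature_thresh=80):
    """Return the odometry back-end to run for the current frame."""
    # Fast rotation or feature starvation: pure optical flow drifts,
    # so fall back to full visual-inertial odometry.
    if gyro_norm > gyro_thresh or n_tracked_features < feature_thresh:
        return "visual_inertial_odometry"
    # Gentle motion with plenty of tracked features: the cheap tracker suffices.
    return "fast_visual_odometry"
```

For example, `select_tracker(0.1, 200)` picks the fast path, while `select_tracker(0.9, 200)` escalates to full VIO.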
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
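The knowledge-fusion idea, supplementing a learned model with an engineered model to damp error spikes outside the training distribution, can be sketched as below. This is a minimal illustration assuming a simple inverse-recent-error weighting; the thesis's actual fusion scheme is not detailed in the abstract, and all names here are hypothetical.

```python
# Hypothetical sketch of knowledge fusion between a learned online model and
# an engineered (physics-based) model: weight each model's prediction by the
# inverse of its recent error, so the engineered model dominates when the
# learned model faces unfamiliar conditions. The weighting rule is illustrative.

class FusedPredictor:
    def __init__(self, learned, engineered, window=10):
        self.learned, self.engineered = learned, engineered
        self.err_l, self.err_e = [], []   # recent absolute errors of each model
        self.window = window

    def predict(self, x):
        pl, pe = self.learned(x), self.engineered(x)
        # Recent error sums (tiny epsilon gives a 50/50 blend before any data).
        el = sum(self.err_l[-self.window:]) + 1e-9
        ee = sum(self.err_e[-self.window:]) + 1e-9
        w_learned = ee / (el + ee)        # trust whichever erred less recently
        return w_learned * pl + (1 - w_learned) * pe

    def update(self, x, truth):
        # Record each model's error against the observed sensor value.
        self.err_l.append(abs(self.learned(x) - truth))
        self.err_e.append(abs(self.engineered(x) - truth))
```

After a few updates in a regime where the learned model is accurate, its weight approaches one, mirroring the reported error reductions over the engineered models alone.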
Lyon, Aaron R; Wasse, Jessica Knaster; Ludwig, Kristy; Zachry, Mark; Bruns, Eric J; Unützer, Jürgen; McCauley, Elizabeth
2016-05-01
Health information technologies have become a central fixture in the mental healthcare landscape, but few frameworks exist to guide their adaptation to novel settings. This paper introduces the contextualized technology adaptation process (CTAP) and presents data collected during Phase 1 of its application to measurement feedback system development in school mental health. The CTAP is built on models of human-centered design and implementation science and incorporates repeated mixed methods assessments to guide the design of technologies to ensure high compatibility with a destination setting. CTAP phases include: (1) Contextual evaluation, (2) Evaluation of the unadapted technology, (3) Trialing and evaluation of the adapted technology, (4) Refinement and larger-scale implementation, and (5) Sustainment through ongoing evaluation and system revision. Qualitative findings from school-based practitioner focus groups are presented, which provided information for CTAP Phase 1, contextual evaluation, surrounding education sector clinicians' workflows, types of technologies currently available, and influences on technology use. Discussion focuses on how findings will inform subsequent CTAP phases, as well as their implications for future technology adaptation across content domains and service sectors.
An Infrastructure to Enable Lightweight Context-Awareness for Mobile Users
Curiel, Pablo; Lago, Ana B.
2013-01-01
Mobile phones enable us to carry out a wider range of tasks every day, and as a result they have become more ubiquitous than ever. However, they are still more limited in terms of processing power and interaction capabilities than traditional computers, and the often distracting and time-constricted scenarios in which we use them do not help in alleviating these limitations. Context-awareness is a valuable technique to address these issues, as it enables adapting application behaviour to each situation. In this paper we present a context management infrastructure for mobile environments, aimed at controlling the context information life-cycle in this kind of scenario, with the main goal of enabling applications and services to adapt their behaviour to better meet end-user needs. This infrastructure relies on semantic technologies and open standards to improve interoperability, and is based on a central element, the context manager. This element acts as a central context repository and takes on most of the computational burden derived from dealing with this kind of information, thus relieving the more resource-scarce devices in the system of these tasks. PMID:23899932
NASA Astrophysics Data System (ADS)
Tsai, Chun-Wei; Lyu, Bo-Han; Wang, Chen; Hung, Cheng-Chieh
2017-05-01
We have already developed multi-function, easy-to-use modulation software based on the LabVIEW system. The software has four main functions: computer-generated hologram (CGH) generation, CGH reconstruction, image trimming, and special phase distribution. Building on this CGH modulation software, we can make the liquid crystal on silicon spatial light modulator (LCoS-SLM) perform much like a diffractive optical element (DOE) and use it in various adaptive optics (AO) applications. Through the development of special phase distributions, we are going to apply the LCoS-SLM with CGH modulation software to AO technology, such as optical microscope systems. When the LCoS-SLM panel is integrated into an optical microscope system, it can be placed in the illumination path or in the image-forming path. The LCoS-SLM provides a program-controllable liquid crystal array for the optical microscope: it dynamically changes the amplitude or phase of light, giving the system the obvious advantage of flexibility.
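The abstract does not say how the CGH generation function computes its phase patterns; a common textbook approach for phase-only SLMs is the Gerchberg-Saxton iteration, sketched below as an illustrative stand-in (NumPy-based; not necessarily the method used in the LabVIEW software).

```python
# Illustrative Gerchberg-Saxton iteration for a phase-only CGH: find an SLM
# phase pattern whose far-field intensity approximates a target image.
# This is a standard textbook method, assumed here for illustration only.
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50):
    """Return a phase pattern (radians) for a unit-amplitude phase-only SLM."""
    target_amp = np.sqrt(target_intensity)
    # Random initial phase helps the iteration escape symmetric fixed points.
    phase = np.random.default_rng(0).uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        # Propagate the SLM plane (unit amplitude, current phase) to far field.
        far = np.fft.fft2(np.exp(1j * phase))
        # Impose the target amplitude, keep the computed far-field phase.
        far = target_amp * np.exp(1j * np.angle(far))
        # Back-propagate and keep only the phase (phase-only SLM constraint).
        phase = np.angle(np.fft.ifft2(far))
    return phase
```

The returned array can be quantized to the SLM's phase levels and displayed; the reconstruction is then the squared magnitude of the far field of `exp(1j * phase)`.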
A versatile system for the rapid collection, handling and graphics analysis of multidimensional data
NASA Astrophysics Data System (ADS)
O'Brien, P. M.; Moloney, G.; O'Connor, A.; Legge, G. J. F.
1993-05-01
The aim of this work was to provide a versatile system for handling multiparameter data that may arise from a variety of experiments — nuclear, AMS, microprobe elemental analysis, 3D microtomography, etc. Some of the most demanding requirements arise in the application of microprobes to quantitative elemental mapping and to microtomography. A system to handle data from such experiments had been under continuous development and use at MARC for the past 15 years. It has now been made adaptable to the needs of multiparameter (or single-parameter) experiments in general. The original system has been rewritten, greatly expanded, and made much more powerful and faster by use of modern computer technology — a VME bus computer with a real-time operating system and a RISC workstation running Unix and the X Window System. This provides the necessary (i) power, speed and versatility, (ii) expansion and updating capabilities, (iii) standardisation and adaptability, (iv) coherent modular programming structures, (v) ability to interface to other programs and (vi) transparent operation at several levels, involving the use of menus, programmed function keys and powerful macro programming facilities.
Target recognition based on the moment functions of radar signatures
NASA Astrophysics Data System (ADS)
Kim, Kyung-Tae; Kim, Hyo-Tae
2002-03-01
In this paper, we present the results of target recognition research based on the moment functions of various radar signatures, such as time-frequency signatures, range profiles, and scattering centers. The proposed approach utilizes geometrical moments or central moments of the obtained radar signatures. In particular, we derived exact and closed form expressions of the geometrical moments of the adaptive Gaussian representation (AGR), which is one of the adaptive joint time-frequency techniques, and also computed the central moments of range profiles and one-dimensional (1-D) scattering centers on a target, which are obtained by various super-resolution techniques. The obtained moment functions are further processed to provide small dimensional and redundancy-free feature vectors, and classified via a neural network approach or a Bayes classifier. The performances of the proposed technique are demonstrated using a simulated radar cross section (RCS) data set, or a measured RCS data set of various scaled aircraft models, obtained at the Pohang University of Science and Technology (POSTECH) compact range facility. Results show that the techniques in this paper can not only provide reliable classification accuracy, but also save computational resources.
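As a small illustration of the feature-extraction step described above, the central moments of a 1-D range profile (treated as a discrete distribution over range bins) can be computed as follows; the choice of orders 2 through 4 is an assumption for illustration, not the paper's exact feature set.

```python
# Central moments of a 1-D radar range profile, treating the (non-negative)
# profile values as weights over range-bin positions. Orders 2..max_order
# form a translation-invariant feature vector for classification.

def central_moments(profile, max_order=4):
    """Return central moments of orders 2..max_order for a 1-D profile."""
    total = sum(profile)
    xs = range(len(profile))
    # First moment: the centroid (mean range-bin position).
    mean = sum(x * p for x, p in zip(xs, profile)) / total
    # Higher-order moments are taken about the centroid.
    return [sum((x - mean) ** k * p for x, p in zip(xs, profile)) / total
            for k in range(2, max_order + 1)]
```

For a symmetric profile the odd-order central moments vanish, which is one reason such moments make compact, redundancy-free features.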
Life-Span Differences in the Uses and Gratifications of Tablets: Implications for Older Adults
Magsamen-Conrad, Kate; Dowd, John; Abuljadail, Mohammad; Alsulaiman, Saud; Shareefi, Adnan
2015-01-01
This study extends Uses and Gratifications theory by examining the uses and gratifications of a new technological device, the tablet computer, and investigating the differential uses and gratifications of tablet computers across the life-span. First, we utilized a six-week tablet training intervention to adapt and extend existing measures to the tablet as a technological device. Next, using paper-based and online surveys (N=847), we confirmed four main uses of tablets: 1) Information Seeking, 2) Relationship Maintenance, 3) Style, and 4) Amusement and Killing Time, and added one additional use category, 5) Organization. We discovered differences among the five main uses of tablets across the life-span, with older adults using tablets the least overall. Builders, Boomers, GenX and GenY all reported the highest means for information seeking. Finally, we used a structural equation model to examine how uses and gratifications predict hours of tablet use. The study provides limitations and suggestions for future research and marketers. In particular, this study offers insight into the relevancy of the theory as it applies to particular information and communication technologies and considers how different periods in the life-span affect tablet motivations. PMID:26113769
Life-Span Differences in the Uses and Gratifications of Tablets: Implications for Older Adults.
Magsamen-Conrad, Kate; Dowd, John; Abuljadail, Mohammad; Alsulaiman, Saud; Shareefi, Adnan
2015-11-01
This study extends Uses and Gratifications theory by examining the uses and gratifications of a new technological device, the tablet computer, and investigating the differential uses and gratifications of tablet computers across the life-span. First, we utilized a six-week tablet training intervention to adapt and extend existing measures to the tablet as a technological device. Next, using paper-based and online surveys (N=847), we confirmed four main uses of tablets: 1) Information Seeking, 2) Relationship Maintenance, 3) Style, and 4) Amusement and Killing Time, and added one additional use category, 5) Organization. We discovered differences among the five main uses of tablets across the life-span, with older adults using tablets the least overall. Builders, Boomers, GenX and GenY all reported the highest means for information seeking. Finally, we used a structural equation model to examine how uses and gratifications predict hours of tablet use. The study provides limitations and suggestions for future research and marketers. In particular, this study offers insight into the relevancy of the theory as it applies to particular information and communication technologies and considers how different periods in the life-span affect tablet motivations.
A psychotechnological review on eye-tracking systems: towards user experience.
Mele, Maria Laura; Federici, Stefano
2012-07-01
The aim of the present work is to provide a critical review of the international literature on eye-tracking technologies, focusing on those features that characterize them as 'psychotechnologies'. A critical literature review was conducted through the main psychology, engineering, and computer science databases, following specific inclusion and exclusion criteria. A total of 46 matches from 1998 to 2010 were selected for content analysis. Results have been divided into four broad thematic areas. We found that, although there is growing attention to end users, most of the studies reviewed in this work are far from adopting holistic human-computer interaction models that include both the individual differences and the needs of users. The user is often considered only as a measurement object of the functioning of the technological system and not as a real alter ego of the intrasystemic interaction. To fully benefit from the communicative functions of gaze, research on eye tracking must emphasize user experience. Eye-tracking systems will become an effective assistive technology for integration, adaptation, and neutralization of environmental barriers only when a holistic model can be applied to both the design processes and the assessment of the functional components of the interaction.
Improvements in Routing for Packet-Switched Networks
1975-02-18
Appendix B: Adaptive Routing Program for Computer Simulation. Contents: B.1 Flow Diagram of Adaptive Routine; B.2 Program ARPSIM; B.3 Explanation of Variables. The computer simulation for adaptive routing was initially run on a DDP-24 small... [Flow-diagram residue, retained in brief: transmit the messages in queue over available links; compute Ni, the number of arrivals at each node i at time T (Fig. B1a, Flow Diagram of Program Routine).]
Rosser, Benjamin A; McCullagh, Paul; Davies, Richard; Mountain, Gail A; McCracken, Lance; Eccleston, Christopher
2011-04-01
Adapting therapeutic practice from traditional face-to-face exchange to remote technology-based delivery presents challenges for the therapist, patient, and technical writer. This article documents the process of therapy adaptation and the resultant specification for the SMART2 project, a technology-based self-management system for assisting people with long-term health conditions, including chronic pain. Focus group discussions with healthcare professionals and patients were conducted to inform selection of therapeutic objectives and appropriate technology. Pertinent challenges are identified, relating to (1) reduction and definition of therapeutic objectives, and (2) how to approach adaptation of therapy to a form suited to technology delivery. The requirement of the system to provide dynamic and intelligent responses to patient experience and behavior is also emphasized. Solutions to these challenges are described in the context of the SMART2 technology-based intervention. More explicit discussion and documentation of therapy adaptation to technology-based delivery within the literature is encouraged.
Silverthorne, C; Wang, T H
2001-07-01
The present study was an evaluation of the impact of Taiwanese leadership styles on the productivity of Taiwanese business organizations. Specifically, it looked at the impact that both adaptive and nonadaptive leaders have on 6 measures of productivity: absenteeism, turnover rate, quality of work, reject rates, profitability, and units produced. The results indicated that the greater the level of adaptability, the more productive the organization is likely to be. Although not all of the computed correlations were statistically significant, they were all in the predicted directions. In particular, the findings for units produced and reject rates were consistently statistically significant. The study was also an examination of the usefulness of the Leadership Effectiveness and Adaptability Description (LEAD) questionnaire (P. Hersey & K. Blanchard, 1988), which appeared to be an accurate predictor of adaptability and valid for use in Taiwan. The final part of this study was an investigation of whether successful companies were more likely to have a greater percentage of adaptive leaders than unsuccessful companies. The data supported this expectation, although it is suggested that caution be used in the interpretation of this particular finding because it could have several different explanations. Overall, the evidence supported the value of adaptive leadership styles in high-technology industries in Taiwan.
Computerized Adaptive Assessment of Cognitive Abilities among Disabled Adults.
ERIC Educational Resources Information Center
Engdahl, Brian
This study examined computerized adaptive testing and cognitive ability testing of adults with cognitive disabilities. Adult subjects (N=250) were given computerized tests on language usage and space relations in one of three administration conditions: paper and pencil, fixed length computer adaptive, and variable length computer adaptive.…
Adaptive efficient compression of genomes
2012-01-01
Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times often are slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
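The core idea of referential compression described in this abstract can be illustrated with a deliberately minimal sketch (not the paper's algorithm): the input is encoded as (reference position, length) matches against a known reference, with unmatched bases stored as literals. The seed length and the greedy matching strategy are illustrative assumptions.

```python
K = 8  # seed length for the reference k-mer index (illustrative choice)

def build_index(reference):
    """Map every k-mer in the reference to its first position."""
    index = {}
    for i in range(len(reference) - K + 1):
        index.setdefault(reference[i:i + K], i)
    return index

def compress(sequence, reference, index):
    """Encode sequence as ('ref', pos, length) matches and ('lit', base) literals."""
    ops, i = [], 0
    while i < len(sequence):
        pos = index.get(sequence[i:i + K])
        if pos is None:
            ops.append(("lit", sequence[i]))   # no seed match: store the base itself
            i += 1
            continue
        # extend the seed match as far as the reference agrees with the input
        length = K
        while (i + length < len(sequence) and pos + length < len(reference)
               and sequence[i + length] == reference[pos + length]):
            length += 1
        ops.append(("ref", pos, length))
        i += length
    return ops

def decompress(ops, reference):
    """Rebuild the original sequence from the ops and the reference."""
    out = []
    for op in ops:
        if op[0] == "lit":
            out.append(op[1])
        else:
            _, pos, length = op
            out.append(reference[pos:pos + length])
    return "".join(out)
```

A real implementation would additionally serialize the ops compactly and parallelize the matching, which is where the memory/speed trade-off discussed above arises.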
NASA Astrophysics Data System (ADS)
Tian, Shudong; Han, Jun; Yang, Jianwei; Zeng, Xiaoyang
2017-10-01
Electrocardiogram (ECG) can be used as a valid way for diagnosing heart disease. To fulfill ECG processing in wearable devices by reducing computation complexity and hardware cost, two kinds of adaptive filters are designed to perform QRS complex detection and motion artifacts removal, respectively. The proposed design achieves a sensitivity of 99.49% and a positive predictivity of 99.72%, tested under the MIT-BIH ECG database. The proposed design is synthesized under the SMIC 65-nm CMOS technology and verified by post-synthesis simulation. Experimental results show that the power consumption and area cost of this design are 160 μW and 1.09 × 10⁵ μm², respectively. Project supported by the National Natural Science Foundation of China (Nos. 61574040, 61234002, 61525401).
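Adaptive artifact removal of the kind mentioned above is commonly built on the least-mean-squares (LMS) update. The sketch below is a generic LMS noise canceller, not the paper's hardware design: a reference input correlated with the artifact is filtered adaptively and subtracted from the primary channel; tap count and step size are illustrative.

```python
def lms_cancel(primary, reference, n_taps=4, mu=0.05):
    """Return the error signal e[n] = primary[n] - adaptive estimate of the artifact."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(primary)):
        # regressor: the most recent n_taps reference samples (zero-padded at start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))            # artifact estimate
        e = primary[n] - y                                   # cleaned output
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]   # LMS weight update
        out.append(e)
    return out
```

With a reference that tracks the motion artifact (e.g., an accelerometer channel), the cleaned output converges toward the underlying ECG component.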
Distributed Aviation Concepts and Technologies
NASA Technical Reports Server (NTRS)
Moore, Mark D.
2008-01-01
Aviation has experienced one hundred years of evolution, resulting in the current air transportation system dominated by commercial airliners in a hub and spoke infrastructure. While the first fifty years involved disruptive technologies that required frequent vehicle adaptation, the second fifty years produced a stable evolutionary optimization of decreasing costs with increasing safety. This optimization has resulted in traits favoring a centralized service model with high vehicle productivity and cost efficiency. However, it may also have resulted in a system that is not sufficiently robust to withstand significant system disturbances. Aviation is currently facing rapid change from issues such as environmental damage, terrorism threat, congestion and capacity limitations, and cost of energy. Currently, these issues are leading to a loss of service for weaker spoke markets. These catalysts and a lack of robustness could result in a loss of service for much larger portions of the aviation market. The impact of other competing transportation services may be equally important as causal factors of change. Highway system forecasts indicate a dramatic slowdown as congestion reaches a point of non-linearly increasing delay. In the next twenty-five years, there is the potential for aviation to transform itself into a more robust, scalable, adaptive, secure, safe, affordable, convenient, efficient and environmentally friendly system. To achieve these characteristics, the new system will likely be based on a distributed model that enables more direct services. Short range travel is already demonstrating itself to be inefficient with a centralized model, providing opportunities for emergent distributed services through air-taxi models. Technologies from the on-demand revolution in computers and communications are now available as major drivers for aviation on-demand adaptation.
Other technologies such as electric propulsion are currently transforming the automobile industry, and will also significantly alter the functionality of future distributed aviation concepts. Many hurdles exist, including technology, regulation, and perception. Aviation has an inherent governmental role not present in other recent on-demand transformations, which may pose a risk of curtailing aviation democratization.
A Framework for Integration of IVHM Technologies for Intelligent Integration for Vehicle Management
NASA Technical Reports Server (NTRS)
Paris, Deidre E.; Trevino, Luis; Watson, Mike
2005-01-01
As a part of the overall goal of developing Integrated Vehicle Health Management (IVHM) systems for aerospace vehicles, the NASA Faculty Fellowship Program (NFFP) at Marshall Space Flight Center has performed a pilot study on IVHM principles which integrates researched IVHM technologies in support of Integrated Intelligent Vehicle Management (IIVM). IVHM is the process of assessing, preserving, and restoring system functionality across flight and ground systems (NASA NGLT 2004). The framework presented in this paper integrates advanced computational techniques with sensor and communication technologies for spacecraft that can generate responses through detection, diagnosis, and reasoning, and adapt to system faults in support of IIVM. These real-time responses allow the IIVM to modify the affected vehicle subsystem(s) prior to a catastrophic event. Furthermore, the objective of this pilot program is to develop and integrate technologies which can provide a continuous, intelligent, and adaptive health state of a vehicle and use this information to improve safety and reduce costs of operations. Recent investments in avionics, health management, and controls have been directed towards IIVM. As this concept has matured, it has become clear that IIVM requires the same sensors and processing capabilities as the real-time avionics functions to support diagnosis of subsystem problems. In addition, new sensors have been proposed to augment the avionics sensors to support better system monitoring and diagnostics. As the designs have been considered, a synergy has been realized where the real-time avionics can utilize sensors proposed for diagnostics and prognostics to make better real-time decisions in response to detected failures. IIVM provides for a single system allowing modularity of functions and hardware across the vehicle.
The framework that supports IIVM consists of 11 major on-board functions necessary to fully manage a space vehicle maintaining crew safety and mission objectives: Guidance and Navigation; Communications and Tracking; Vehicle Monitoring; Information Transport and Integration; Vehicle Diagnostics; Vehicle Prognostics; Vehicle Mission Planning; Automated Repair and Replacement; Vehicle Control; Human Computer Interface; and Onboard Verification and Validation. Furthermore, the presented framework provides complete vehicle management which not only allows for increased crew safety and mission success through new intelligence capabilities, but also yields a mechanism for more efficient vehicle operations. The representative IVHM technologies for IIVM include: 1) robust controllers for use in re-usable launch vehicles, 2) scalable/flexible computer platform using heterogeneous communication, 3) coupled electromagnetic oscillators for enhanced communications, 4) Linux-based real-time systems, 5) genetic algorithms, 6) Bayesian Networks, 7) evolutionary algorithms, 8) dynamic systems control modeling, and 9) advanced sensing capabilities. This paper presents IVHM technologies developed under NASA's NFFP pilot project. The integration of these IVHM technologies forms the framework for IIVM.
Learner-Adaptive Educational Technology for Simulation in Healthcare: Foundations and Opportunities.
Lineberry, Matthew; Dev, Parvati; Lane, H Chad; Talbot, Thomas B
2018-06-01
Despite evidence that learners vary greatly in their learning needs, practical constraints tend to favor "one-size-fits-all" educational approaches, in simulation-based education as elsewhere. Adaptive educational technologies (devices and/or software applications that capture and analyze relevant data about learners to select and present individually tailored learning stimuli) are a promising aid in learners' and educators' efforts to provide learning experiences that meet individual needs. In this article, we summarize and build upon the 2017 Society for Simulation in Healthcare Research Summit panel discussion on adaptive learning. First, we consider the role of adaptivity in learning broadly. We then outline the basic functions that adaptive learning technologies must implement and the unique affordances and challenges of technology-based approaches for those functions, sharing an illustrative example from healthcare simulation. Finally, we consider future directions for accelerating research, development, and deployment of effective adaptive educational technology and techniques in healthcare simulation.
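The "select and present individually tailored stimuli" loop described above can be made concrete with a deliberately minimal staircase rule (a generic stand-in, not any system from this article or the adaptive tests cited elsewhere in this collection): difficulty moves up after a correct response and down after an incorrect one, within assumed bounds.

```python
def next_difficulty(current, correct, step=1, lo=1, hi=10):
    """Staircase item selection: raise difficulty on success, lower it on failure."""
    d = current + step if correct else current - step
    return max(lo, min(hi, d))  # clamp to the allowed difficulty range
```

Real adaptive engines replace this rule with a learner model (e.g., item response theory), but the control loop (measure, update, select) is the same.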
A new decision sciences for complex systems.
Lempert, Robert J
2002-05-14
Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.
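The idea of systematically comparing policy options across a large ensemble of computational experiments can be illustrated with a minimax-regret sketch (a generic robust-decision illustration, not the article's actual tooling): each policy is scored in every scenario, and the policy with the smallest worst-case regret is selected.

```python
def minimax_regret(costs):
    """costs[policy] is a list of that policy's cost in each scenario.
    Returns the policy whose worst-case regret across scenarios is smallest."""
    scenarios = range(len(next(iter(costs.values()))))
    # best achievable cost in each scenario, over all policies
    best = {s: min(costs[p][s] for p in costs) for s in scenarios}
    # a policy's regret in a scenario is its cost minus that scenario's best
    regret = {p: max(costs[p][s] - best[s] for s in scenarios) for p in costs}
    return min(regret, key=regret.get)
```

A policy that is never optimal can still win here because it is never far from optimal, which is the hedging behavior a robust, adaptive strategy aims for.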
Rural women, technology, and self-management of chronic illness.
Weinert, Clarann; Cudney, Shirley; Hill, Wade G
2008-09-01
The objective of this study was to determine the differences in the psychosocial status of 3 groups of chronically ill rural women participating in a computer intervention. The 3 groups were: intense intervention, less-intense intervention, and control. At baseline and following the intervention, measures were taken for social support, self-esteem, empowerment, self-efficacy, depression, stress, and loneliness. ANCOVA results showed group differences for social support and self-efficacy in the overall group. The findings differed for a vulnerable subgroup, with significant between-group differences for social support and loneliness. It was concluded that a computer-delivered intervention can improve social support and self-efficacy and reduce loneliness in rural women, enhancing their ability to self-manage and adapt to chronic illness.
Computer control of a robotic satellite servicer
NASA Technical Reports Server (NTRS)
Fernandez, K. R.
1980-01-01
The advantages that will accrue from the in-orbit servicing of satellites are listed. It is noted that a concept in satellite servicing which holds promise as a compromise between the high flexibility and adaptability of manned vehicles and the lower cost of an unmanned vehicle involves an unmanned servicer carrying a remotely supervised robotic manipulator arm. Because of deficiencies in sensor technology, robot servicing would require that satellites be designed according to a modular concept. A description is given of the servicer simulation hardware, the computer and interface hardware, and the software. It is noted that several areas require further development; these include automated docking, modularization of satellite design, reliable connector and latching mechanisms, development of manipulators for space environments, and development of automated diagnostic techniques.
Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene
2010-01-01
Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael J. Bockelie
2002-01-04
This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two phase turbulent reacting flow. The target applications for the new tool are full scale, fossil-fuel fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The type of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flowfield. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid cannot contain too many grid points: the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture both fine flow field detail within a very large domain with a "reasonable" number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL).
Specifically, we combined: physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International; mesh adaptation, data management, and parallelization software and technology being developed by users of the BoxLib library at LBNL; and solution methods for problems formulated on block structured grids that were being developed in collaboration with technical staff members at the University of Utah Center for High Performance Computing (CHPC) and at LBNL. The combustion modeling software used by Reaction Engineering International represents an investment of over fifty man-years of development, conducted over a period of twenty years. Thus, it was impractical to achieve our objective by starting from scratch. The research program resulted in an adaptive grid, reacting CFD flow solver that can be used only on limited problems. In its current form the code is appropriate for use on academic problems with simplified geometries. The new solver is not sufficiently robust or sufficiently general to be used in a "production mode" for industrial applications. The principal difficulty lies with the multi-level solver technology. The use of multi-level solvers on adaptive grids with embedded boundaries is not yet a mature field and there are many issues that remain to be resolved. From the lessons learned in this SBIR program, we have started work on a new flow solver with an AMR capability. The new code is based on a conventional cell-by-cell mesh refinement strategy used in unstructured grid solvers that employ hexahedral cells. The new solver employs several of the concepts and solution strategies developed within this research program. The formulation of the composite grid problem for the new solver has been designed to avoid the embedded boundary complications encountered in this SBIR project.
This follow-on effort will result in a reacting flow CFD solver with localized mesh capability that can be used to perform engineering calculations on industrial problems in a production mode.
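The cell-by-cell refinement strategy mentioned above can be sketched in one dimension (a toy illustration, not the report's solver): cells where the solution jumps sharply relative to a neighbour are flagged, and flagged cells are split at their midpoints. The gradient-jump criterion and threshold are illustrative assumptions.

```python
def flag_cells(values, threshold):
    """Flag cell i if the solution jump to either neighbour exceeds threshold."""
    n = len(values)
    return [(i > 0 and abs(values[i] - values[i - 1]) > threshold)
            or (i < n - 1 and abs(values[i + 1] - values[i]) > threshold)
            for i in range(n)]

def refine(cells, flags):
    """Split every flagged cell (a, b) at its midpoint; keep the rest unchanged."""
    out = []
    for (a, b), flagged in zip(cells, flags):
        if flagged:
            mid = (a + b) / 2
            out.extend([(a, mid), (mid, b)])
        else:
            out.append((a, b))
    return out
```

Production AMR codes add level hierarchies, coarsening, and load balancing on top of this flag-and-split core, which is what makes the solver development described above non-trivial.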
From astronomy and telecommunications to biomedicine
NASA Astrophysics Data System (ADS)
Behr, Bradford B.; Baker, Scott A.; Bismilla, Yusuf; Cenko, Andrew T.; DesRoches, Brandon; Hajian, Arsen R.; Meade, Jeffrey T.; Nitkowski, Arthur; Preston, Kyle J.; Schmidt, Bradley S.; Sherwood-Droz, Nicolás.; Slaa, Jared
2015-03-01
Photonics is an inherently interdisciplinary endeavor, as technologies and techniques invented or developed in one scientific field are often found to be applicable to other fields or disciplines. We present two case studies in which optical spectroscopy technologies originating from stellar astrophysics and optical telecommunications multiplexing have been successfully adapted for biomedical applications. The first case involves a design concept called the High Throughput Virtual Slit, or HTVS, which provides high spectral resolution without the throughput inefficiency typically associated with a narrow spectrometer slit. HTVS-enhanced spectrometers have been found to significantly improve the sensitivity and speed of fiber-fed Raman analysis systems, and the method is now being adapted for hyperspectral imaging for medical and biological sensing. The second example of technology transfer into biomedicine centers on integrated optics, in which optical waveguides are fabricated on to silicon substrates in a substantially similar fashion as integrated circuits in computer chips. We describe an architecture referred to as OCTANE which implements a small and robust "spectrometer-on-a-chip" which is optimized for optical coherence tomography (OCT). OCTANE-based OCT systems deliver three-dimensional imaging resolution at the micron scale with greater stability and lower cost than equivalent conventional OCT approaches. Both HTVS and OCTANE enable higher precision and improved reliability under environmental conditions that are typically found in a clinical or laboratory setting.
A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework
NASA Astrophysics Data System (ADS)
Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco
2017-11-01
We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique designed specifically to be effective for LES applications is proposed and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.
Thibodeau, Linda
2014-06-01
The purpose of this study was to compare the benefits of 3 types of remote microphone hearing assistance technology (HAT), adaptive digital broadband, adaptive frequency modulation (FM), and fixed FM, through objective and subjective measures of speech recognition in clinical and real-world settings. Participants included 11 adults, ages 16 to 78 years, with primarily moderate-to-severe bilateral hearing impairment (HI), who wore binaural behind-the-ear hearing aids; and 15 adults, ages 18 to 30 years, with normal hearing. Sentence recognition in quiet and in noise and subjective ratings were obtained in 3 conditions of wireless signal processing. Performance by the listeners with HI when using the adaptive digital technology was significantly better than that obtained with the FM technology, with the greatest benefits at the highest noise levels. The majority of listeners also preferred the digital technology when listening in a real-world noisy environment. The wireless technology allowed persons with HI to surpass persons with normal hearing in speech recognition in noise, with the greatest benefit occurring with adaptive digital technology. The use of adaptive digital technology combined with speechreading cues would allow persons with HI to engage in communication in environments that would have otherwise not been possible with traditional wireless technology.
Recent advances in radiation oncology.
Garibaldi, Cristina; Jereczek-Fossa, Barbara Alicja; Marvaso, Giulia; Dicuonzo, Samantha; Rojas, Damaris Patricia; Cattani, Federica; Starzyńska, Anna; Ciardo, Delia; Surgo, Alessia; Leonardi, Maria Cristina; Ricotti, Rosalinda
2017-01-01
Radiotherapy (RT) is very much a technology-driven treatment modality in the management of cancer. RT techniques have changed significantly over the past few decades, thanks to improvements in engineering and computing. We aim to highlight the recent developments in radiation oncology, focusing on the technological and biological advances. We will present state-of-the-art treatment techniques, employing photon beams, such as intensity-modulated RT, volumetric-modulated arc therapy, stereotactic body RT and adaptive RT, which make possible a highly tailored dose distribution with maximum normal tissue sparing. We will analyse all the steps involved in the treatment: imaging, delineation of the tumour and organs at risk, treatment planning and finally image-guidance for accurate tumour localisation before and during treatment delivery. Particular attention will be given to the crucial role that imaging plays throughout the entire process. In the case of adaptive RT, the precise identification of target volumes as well as the monitoring of tumour response/modification during the course of treatment is mainly based on multimodality imaging that integrates morphological, functional and metabolic information. Moreover, real-time imaging of the tumour is essential in breathing adaptive techniques to compensate for tumour motion due to respiration. Brief reference will be made to the recent spread of particle beam therapy, in particular to the use of protons, but also to the yet limited experience of using heavy particles such as carbon ions. Finally, we will analyse the latest biological advances in tumour targeting. Indeed, the effectiveness of RT has been improved not only by technological developments but also through the integration of radiobiological knowledge to produce more efficient and personalised treatment strategies.
Intraoral Scanner Technologies: A Review to Make a Successful Impression
Richert, Raphaël; Goujat, Alexis; Venet, Laurent; Viguie, Gilbert; Viennot, Stéphane; Robinson, Philip; Farges, Jean-Christophe; Fages, Michel
2017-01-01
To overcome difficulties associated with conventional techniques, impressions with IOS (intraoral scanner) and CAD/CAM (computer-aided design and manufacturing) technologies were developed for dental practice. The last decade has seen an increasing number of optical IOS devices, and these are based on different technologies; the choice of which may impact on clinical use. To allow informed choice before purchasing or renewing an IOS, this article summarizes first the technologies currently used (light projection, distance object determination, and reconstruction). In the second section, the clinical considerations of each strategy such as handling, learning curve, powdering, scanning paths, tracking, and mesh quality are discussed. The last section is dedicated to the accuracy of files and of the intermaxillary relationship registered with IOS as the rendering of files in the graphical user interface is often misleading. This overview leads to the conclusion that current IOSs are suitable for everyday practice, although differences exist between the technologies employed. An important aspect highlighted in this review is the reduction in the volume of hardware which has led to an increase in the importance of software-based technologies. PMID:29065652
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which has the advantages of strong independence, high precision and insusceptibility to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aerial vehicles, deep-space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAV and the successive phases of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper reviews the development of computer-vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters. Parameters including UAV attitude, position and velocity can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantly matched images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision technology, including feature matching, template matching and image frames, are mainly introduced. (3) Target tracking and positioning. Using the obtained images, UAV position is calculated by using the optical flow method, the MeanShift algorithm, the CamShift algorithm, Kalman filtering and particle filter algorithms. The paper also expounds three kinds of mainstream visual system. (1) High-speed visual systems.
These use a parallel structure with which image detection and processing are carried out at high speed, and are applied in rapid-response systems. (2) Distributed-network visual systems. Several discrete image-acquisition sensors in different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers. These combine image sensors with external observers to make up for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frequency, low processing efficiency and strong noise. In the end, the difficulties of navigation based on computer vision technology in practical application are briefly discussed. (1) Because of the huge workload of image operations, the real-time performance of the system is poor. (2) Because of the large environmental impact, the anti-interference ability of the system is poor. (3) Because the system can work only in particular environments, its adaptability is poor.
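Among the tracking methods this survey lists, Kalman filtering is the easiest to show in miniature. The sketch below is a generic one-dimensional Kalman filter for estimating a target position from noisy image-derived measurements (illustrative only; the process and measurement noise values are assumptions, not from the survey).

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Estimate a (nearly constant) position from noisy scalar measurements.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = measurements[0], 1.0       # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                         # predict: variance grows by process noise
        k = p / (p + r)                # Kalman gain: trust in the new measurement
        x += k * (z - x)               # update estimate with the residual
        p *= (1 - k)                   # update (shrink) the estimate variance
        estimates.append(x)
    return estimates
```

In a UAV tracker the state would also carry velocity and the measurements would come from a detector (e.g., MeanShift output), but the predict/update cycle is identical.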
Ontology-based prediction of surgical events in laparoscopic surgery
NASA Astrophysics Data System (ADS)
Katić, Darko; Wekerle, Anna-Laura; Gärtner, Fabian; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie
2013-03-01
Context-aware technologies have great potential to help surgeons during laparoscopic interventions. Their underlying idea is to create systems which can adapt their assistance functions automatically to the situation in the OR, thus relieving surgeons from the burden of managing computer assisted surgery devices manually. To this purpose, a certain kind of understanding of the current situation in the OR is essential. Beyond that, anticipatory knowledge of incoming events is beneficial, e.g. for early warnings of imminent risk situations. To achieve the goal of predicting surgical events based on previously observed ones, we developed a language to describe surgeries and surgical events using Description Logics and integrated it with methods from computational linguistics. Using n-Grams to compute probabilities of follow-up events, we are able to make sensible predictions of upcoming events in real-time. The system was evaluated on professionally recorded and labeled surgeries and showed an average prediction rate of 80%.
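The n-gram event prediction described above can be sketched with a plain bigram model (a generic illustration; the authors' actual system integrates this with Description Logics): transition counts between observed events yield probabilities for the most likely follow-up event.

```python
from collections import Counter, defaultdict

def train_bigrams(sequences):
    """Count event -> next-event transitions over a set of event sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, event):
    """Most probable follow-up event after `event`, with its estimated probability."""
    followers = counts[event]
    total = sum(followers.values())
    nxt, c = followers.most_common(1)[0]
    return nxt, c / total
```

The event names below are hypothetical labels, not taken from the evaluated surgeries. Longer n-grams condition on more history at the cost of needing more training sequences.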
NASA Astrophysics Data System (ADS)
Tatnall, Arthur; Burgess, Stephen
Information Systems (IS) courses began in Australia's higher education institutions in the 1960s, and have continued to evolve at a rapid rate since then. Beginning with a need by the Australian Commonwealth Government for a large number of computer professionals, Information Systems (or Business Computing) courses developed rapidly. The nature and content of these courses in the 1960s and 70s, however, was quite different from present courses, and this paper traces this change and the reasons for it. After some brief discussion of the beginnings and the early days of Information Systems curriculum, we address in particular how these courses have evolved in one Australian university over the last 25 years. IS curriculum is seen to adapt, new materials are added and emphases changed as new technologies and new computing applications emerge. The paper offers a model of how curriculum change in Information Systems takes place.
Occupational risk identification using hand-held or laptop computers.
Naumanen, Paula; Savolainen, Heikki; Liesivuori, Jyrki
2008-01-01
This paper describes the Work Environment Profile (WEP) program and its use in risk identification by computer. It is installed into a hand-held computer or a laptop to be used in risk identification during work site visits. A 5-category system is used to describe the identified risks in 7 groups, i.e., accidents, biological and physical hazards, ergonomic and psychosocial load, chemicals, and information technology hazards. Each group contains several qualifying factors. These 5 categories are colour-coded at this stage to aid with visualization. Risk identification produces visual summary images the interpretation of which is facilitated by colours. The WEP program is a tool for risk assessment which is easy to learn and to use both by experts and nonprofessionals. It is especially well adapted to be used both in small and in larger enterprises. Considerable time is saved as no paper notes are needed.
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between layers, which enhances performance. Simulations were carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate (BER) of 1E-5 at an optical signal-to-noise ratio of 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.
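As a rough illustration of the multilevel structure the abstract describes (bit division, per-level coding, mapping to a multilevel symbol), here is a toy sketch. The (3,1) repetition code and the natural 8-level mapping are stand-ins chosen for brevity; they are not the block codes or labeling of the actual RA-MLC scheme:

```python
# Toy multilevel coded modulation (MLC) transmitter sketch:
# 1) divide the source bits across coding levels,
# 2) encode each level independently (here: a (3,1) repetition code),
# 3) combine one coded bit per level into one 2^levels-ary symbol index.

def repetition_encode(bits, n=3):
    """Toy (n,1) repetition code: each bit is repeated n times."""
    return [b for bit in bits for b in [bit] * n]

def mlc_transmit(bits, levels=3):
    """Map a bit stream to symbol indices via per-level coding."""
    # Bit division: round-robin assignment of source bits to levels
    per_level = [bits[i::levels] for i in range(levels)]
    # Per-level coding (in a real MLC scheme each level has its own rate)
    coded = [repetition_encode(lv) for lv in per_level]
    # Modulation: one coded bit from each level forms one symbol index
    n_sym = min(len(c) for c in coded)
    return [sum(coded[lv][k] << lv for lv in range(levels)) for k in range(n_sym)]

symbols = mlc_transmit([1, 0, 1, 1, 0, 0], levels=3)
```

Rate adaptation in such a scheme comes from choosing different code rates per level while the code length stays fixed; the repetition code above fixes one rate purely to keep the sketch short.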
Müller, Andre Matthias; Blandford, Ann; Yardley, Lucy
2017-01-01
Low physical activity and high sedentary behavior in older adults can be addressed with interventions that are delivered through modern technology. Just-In-Time Adaptive Interventions (JITAIs) are an emerging technology-driven behavior-change intervention type and capitalize on data collected via mobile sensing technology (e.g., smartphones) to trigger appropriate support in real life. In this paper we integrate behavior change and aging theory and research, as well as knowledge about older adults' technology use, to conceptualize a JITAI targeting the reduction of sedentary behavior in older adults. The JITAI's ultimate goal is to encourage older adults to take regular activity breaks from prolonged sitting. As a proximal outcome, we suggest the number of daily activity breaks from sitting. Support provided to interrupt sitting time can be based on tailoring variables: (I) the current accumulated sitting time; (II) the location of the individual; (III) the time of day; (IV) the frequency of daily support prompts; and (V) the response to previous support prompts. Data on these variables can be collected using sensors that are commonly built into smartphones (e.g., accelerometer, GPS). Support prompts might be best delivered via traditional text messages, as older adults are usually familiar and comfortable with this function. The content of the prompts should encourage breaks from prolonged sitting by highlighting the immediate benefits of sitting-time interruptions. Additionally, light physical activities that could be done during the breaks should also be presented (e.g., walking into the kitchen to prepare a cup of tea). Although the conceptualized JITAI can be developed and implemented to test its efficacy, more work is required to identify ways to collect, aggregate, organize, and immediately use dense data on the proposed and other potentially important tailoring variables. Machine learning and other computational modelling techniques commonly used by computer scientists and engineers appear promising. To develop powerful JITAIs and to realize the full potential of modern sensing technologies, transdisciplinary approaches are required.
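The five tailoring variables lend themselves to a simple decision-rule sketch. The following is a hypothetical illustration: the function name and every threshold are invented for this example, not values from the paper:

```python
# Hypothetical JITAI trigger rule using the five tailoring variables (I)-(V)
# from the abstract. All thresholds below are illustrative assumptions.

def should_prompt(sitting_minutes, at_home, hour, prompts_today, last_prompt_ignored):
    """Return True if a sitting-break prompt should be triggered now."""
    if not (8 <= hour <= 21):           # (III) time of day: waking hours only
        return False
    if prompts_today >= 8:              # (IV) cap the daily prompt frequency
        return False
    if not at_home:                     # (II) location: only prompt at home
        return False
    threshold = 30                      # (I) accumulated sitting time, minutes
    if last_prompt_ignored:             # (V) back off after an ignored prompt
        threshold = 60
    return sitting_minutes >= threshold

print(should_prompt(45, True, 14, 2, False))  # → True
```

In a deployed JITAI these inputs would come from the phone's accelerometer and GPS, and the rule itself could eventually be replaced by a learned policy, as the abstract's closing remarks suggest.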
NASA Astrophysics Data System (ADS)
Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar
2002-05-01
Two main concepts of head-mounted displays (HMDs) for augmented reality (AR) visualization exist: the optical and the video see-through type. Several research groups have pursued both approaches for utilizing HMDs in computer-aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate seem enormous, the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work on displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head-mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR, we used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was achieved by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer-aided surgery (CAS) system via a TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems: the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
Fault recovery for real-time, multi-tasking computer system
NASA Technical Reports Server (NTRS)
Hess, Richard (Inventor); Kelly, Gerald B. (Inventor); Rogers, Randy (Inventor); Stange, Kent A. (Inventor)
2011-01-01
System and methods for providing a recoverable real time multi-tasking computer system are disclosed. In one embodiment, a system comprises a real time computing environment, wherein the real time computing environment is adapted to execute one or more applications and wherein each application is time and space partitioned. The system further comprises a fault detection system adapted to detect one or more faults affecting the real time computing environment and a fault recovery system, wherein upon the detection of a fault the fault recovery system is adapted to restore a backup set of state variables.
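The recovery mechanism described, restoring a backup set of state variables when a fault is detected, can be sketched as follows. The class and variable names are invented for illustration and do not come from the patent:

```python
# Minimal checkpoint/restore sketch of the recovery idea in the abstract:
# an application saves its state variables at safe points, and on fault
# detection the recovery system restores the last-known-good backup.
import copy

class RecoverableTask:
    def __init__(self, state):
        self.state = state
        self.backup = copy.deepcopy(state)   # backup set of state variables

    def checkpoint(self):
        """Save the current state as the recovery point (at a safe point)."""
        self.backup = copy.deepcopy(self.state)

    def recover(self):
        """Fault recovery: discard corrupted state, restore the backup."""
        self.state = copy.deepcopy(self.backup)

    def step(self, update):
        """Apply an update; an exception stands in for a detected fault."""
        try:
            update(self.state)
            self.checkpoint()
        except Exception:
            self.recover()

task = RecoverableTask({"alt": 1000, "spd": 250})

def bad_update(s):
    s["alt"] = -1                        # corrupt the state...
    raise RuntimeError("fault detected") # ...then the fault fires

task.step(bad_update)  # state rolls back to the checkpoint
```

The real system adds time and space partitioning so that one application's fault cannot corrupt another's state; this sketch shows only the rollback step.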
Nebeker, Camille; Harlow, John; Espinoza Giacinto, Rebeca; Orozco-Linares, Rubi; Bloss, Cinnamon S; Weibel, Nadir
2017-01-01
Vast quantities of personal health information and private identifiable information are being created through mobile apps, wearable sensors, and social networks. While new strategies and tools for obtaining health data have expanded researchers' abilities to design and test personalized and adaptive health interventions, the deployment of pervasive sensing and computational techniques to gather research data is raising ethical challenges for Institutional Review Boards (IRBs) charged with protecting research participants. To explore experiences with, and perceptions about, technology-enabled research, and identify solutions for promoting responsible conduct of this research we conducted focus groups with human research protection program and IRB affiliates. Our findings outline the need for increased collaboration across stakeholders in terms of: (1) shared and dynamic resources that improve awareness of technologies and decrease potential threats to participant privacy and data confidentiality, and (2) development of appropriate and dynamic standards through collaboration with stakeholders in the research ethics community.
Espinosa, J A; Kosnik, L K; Kraitsik, M; Dillow, J C
1997-01-01
In our efforts to reduce cardiac morbidity and mortality we often use terms such as the "battle" or "war" on heart disease. If we believe efforts to reduce cardiac disease are the moral equivalent of war, then perhaps we should explore ways that military strategic and tactical metaphors can be applied through technology to the cardiac battle. In this article we explore three major areas for technological advancement: adaptation of the strategies of outcomes management and evidence-based medicine, computer simulation and animation efforts to create horizontal and vertical integration of strategic efforts, and use of interactive multimedia in "recruiting an army" through community empowerment. The overall goal is to find ways to lift "the fog of war" in the battle on heart disease, in order to further the integration of our various efforts.
Advanced teleoperation: Technology innovations and applications
NASA Technical Reports Server (NTRS)
Schenker, Paul S.; Bejczy, Antal K.; Kim, Won S.
1994-01-01
The capability to remotely and robotically perform space assembly, inspection, servicing, and science functions would rapidly expand our presence in space, and the cost efficiency of being there. There is considerable interest in developing 'telerobotic' technologies, which also have comparably important terrestrial applications in health care, underwater salvage, nuclear waste remediation, and other areas. Such tasks, both space and terrestrial, require both a robot and an operator interface that is highly flexible and adaptive, i.e., capable of efficiently working in changing and often casually structured environments. One systems approach to this requirement is to augment traditional teleoperation with computer assists -- advanced teleoperation. We have spent a number of years pursuing this approach, and highlight some key technology developments and their potential commercial impact. This paper is an illustrative summary rather than a self-contained presentation; for completeness, we include representative technical references to our work which will allow the reader to follow up on items of particular interest.
Investigation of Integrated Vehicle Health Management Approaches
NASA Technical Reports Server (NTRS)
Paris, Deidre
2005-01-01
This report presents the work that was performed during the summer in the Advanced Computing Applications office. The NFFP (NASA Faculty Fellow Program) had ten summer faculty members working on IVHM (Integrated Vehicle Health Management) technologies. The objective of this project was two-fold: 1) to become familiar with IVHM concepts and key demonstrated IVHM technologies; and 2) to integrate the research that has been performed by IVHM faculty members into the MASTLAB (Marshall Avionics Software Test Lab). IVHM is a NASA-wide effort to coordinate, integrate, and apply advanced software, sensors, and design technologies to increase the level of intelligence, autonomy, and health state of future vehicles. IVHM is an important concept because it is consistent with the current plan for NASA to go to the Moon, Mars, and beyond. In order for NASA to become more involved in deep exploration, avionics systems will need to be highly adaptable and autonomous.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
Innovations in healthcare information systems promise to revolutionize and streamline healthcare processes worldwide. However, the complexity of these systems and the need to better understand issues related to human-computer interaction have slowed progress in this area. In this chapter the authors describe their work in using methods adapted from usability engineering, video ethnography and analysis of digital log files for improving our understanding of complex real-world healthcare interactions between humans and technology. The approaches taken are cost-effective and practical and can provide detailed ethnographic data on issues health professionals and consumers encounter while using systems as well as potential safety problems. The work is important in that it can be used in techno-anthropology to characterize complex user interactions with technologies and also to provide feedback into redesign and optimization of improved healthcare information systems.
How to Represent Adaptation in e-Learning with IMS Learning Design
ERIC Educational Resources Information Center
Burgos, Daniel; Tattersall, Colin; Koper, Rob
2007-01-01
Adaptation in e-learning has been an important research topic for the last few decades in computer-based education. In adaptivity, the behaviour of the user triggers actions in the system that guide the learning process. In adaptability, the user makes changes and takes decisions. Progressing from computer-based training and adaptive…
Identifying Differential Item Functioning in Multi-Stage Computer Adaptive Testing
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Li, Johnson
2013-01-01
The purpose of this study is to evaluate the performance of CATSIB (Computer Adaptive Testing-Simultaneous Item Bias Test) for detecting differential item functioning (DIF) when items in the matching and studied subtest are administered adaptively in the context of a realistic multi-stage adaptive test (MST). MST was simulated using a 4-item…
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100, and why these adaptations yielded an efficient STAR program, is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
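The numerical kernel behind the report, the QR-based solution of an overdetermined linear system, can be sketched in plain serial code. This is an illustration using modified Gram-Schmidt, not the original scalar program or the vectorized SL/1 version:

```python
# Serial sketch of solving an overdetermined system A x ≈ b in the
# least-squares sense via QR factorization (modified Gram-Schmidt),
# followed by back substitution on the triangular factor R.
import math

def qr_least_squares(A, b):
    """Solve min ||Ax - b|| for an m x n matrix A (m >= n, full column rank)."""
    m, n = len(A), len(A[0])
    Q = [[A[i][j] for j in range(n)] for i in range(m)]  # working copy of A
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):                       # modified Gram-Schmidt
        R[j][j] = math.sqrt(sum(Q[i][j] ** 2 for i in range(m)))
        for i in range(m):
            Q[i][j] /= R[j][j]
        for k in range(j + 1, n):
            R[j][k] = sum(Q[i][j] * Q[i][k] for i in range(m))
            for i in range(m):
                Q[i][k] -= R[j][k] * Q[i][j]
    # x = R^{-1} (Q^T b) via back substitution
    qtb = [sum(Q[i][j] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for j in reversed(range(n)):
        x[j] = (qtb[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

# Best-fit line through (1,1), (2,2), (3,3): intercept 0, slope 1
x = qr_least_squares([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]], [1.0, 2.0, 3.0])
```

The inner loops here are exactly the kind of regular vector operations (scaling, dot products, AXPY updates) that the report vectorizes for the STAR-100's pipeline.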
Evaluation of Adaptive Noise Management Technologies for School-Age Children with Hearing Loss.
Wolfe, Jace; Duke, Mila; Schafer, Erin; Jones, Christine; Rakita, Lori
2017-05-01
Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations. The objective of this study was to compare speech recognition, speech intelligibility ratings (SIRs), and sound preferences of children using hearing aids equipped with and without adaptive noise management technologies. A single-group, repeated measures design was used to evaluate performance differences obtained in four simulated environments. In each simulated environment, participants were tested in a basic listening program with minimal noise management features, a manual program designed for that scene, and the hearing instruments' adaptive operating system that steered hearing instrument parameterization based on the characteristics of the environment. Twelve children with mild to moderately severe sensorineural hearing loss participated. Speech recognition and SIRs were evaluated in three hearing aid programs with and without noise management technologies across two different test sessions and various listening environments. Also, the participants' perceptual hearing performance in daily real-world listening situations with two of the hearing aid programs was evaluated during a four- to six-week field trial that took place between the two laboratory sessions. On average, the use of adaptive noise management technology improved sentence recognition in noise for speech presented in front of the participant but resulted in a decrement in performance for signals arriving from behind when the participant was facing forward.
However, the improvement with adaptive noise management exceeded the decrement obtained when the signal arrived from behind. Most participants reported better subjective SIRs when using adaptive noise management technologies, particularly when the signal of interest arrived from in front of the listener. In addition, most participants reported a preference for the technology with an automatically switching, adaptive directional microphone and adaptive noise reduction in real-world listening situations when compared to conventional, omnidirectional microphone use with minimal noise reduction processing. Use of the adaptive noise management technologies evaluated in this study improves school-age children's speech recognition in noise for signals arriving from the front. Although a small decrement in speech recognition in noise was observed for signals arriving from behind the listener, most participants reported a preference for use of noise management technology both when the signal arrived from in front and from behind the child. The results of this study suggest that adaptive noise management technologies should be considered for use with school-age children when listening in academic and social situations. American Academy of Audiology
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis of parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask the overheads required for the dynamic balancing of processor workloads with the computations required for the actual numerical solution of the PDEs. Multithreading can also simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code reusability difficult, and increases software complexity.
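The masking idea can be illustrated in miniature: worker threads pull uneven work units from a shared queue, so load balancing overlaps with computation instead of serializing after it. This toy uses a shared queue in place of a real PDE solver or thread-migration protocol:

```python
# Toy dynamic load balancing with threads: uneven "element patch" costs are
# pulled from a shared queue, so threads that finish early immediately fetch
# more work rather than idling at a synchronization barrier.
import queue
import threading

work = queue.Queue()
for size in [5, 1, 4, 2, 3, 1, 6, 2]:    # uneven adaptive workloads
    work.put(size)

results = []
lock = threading.Lock()

def worker():
    while True:
        try:
            patch = work.get_nowait()    # dynamic balancing: pull when idle
        except queue.Empty:
            return                       # no work left: thread exits
        value = sum(range(patch * 1000)) # stand-in for a numerical kernel
        with lock:
            results.append(value)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A real multithreaded PDE solver would add work stealing between address spaces and latency hiding for remote data, but the queue-based pull model above is the same basic mechanism.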
Water System Adaptation To Hydrological Changes: Module 11, Methods and Tools: Computational Models
This course will introduce students to the fundamental principles of water system adaptation to hydrological changes, with emphasis on data analysis and interpretation, technical planning, and computational modeling. Starting with real-world scenarios and adaptation needs, the co...
Management of Computer-Based Instruction: Design of an Adaptive Control Strategy.
ERIC Educational Resources Information Center
Tennyson, Robert D.; Rothen, Wolfgang
1979-01-01
Theoretical and research literature on learner, program, and adaptive control as forms of instructional management are critiqued in reference to the design of computer-based instruction. An adaptive control strategy using an online, iterative algorithmic model is proposed. (RAO)
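As a sketch of what such an adaptive control strategy might look like, the following toy rule adjusts the amount of remaining practice online from the learner's running performance. The update rule and all constants are invented for illustration; they are not Tennyson and Rothen's actual algorithm:

```python
# Hypothetical adaptive-control loop for computer-based instruction: after
# each response, the system re-estimates mastery and decides how many more
# practice items to assign (0 means the learner advances).

def next_item_count(correct, attempted, mastery=0.8, base=5):
    """Return how many more practice items to assign."""
    if attempted == 0:
        return base                       # no data yet: assign the base set
    p = correct / attempted               # running proportion correct
    if p >= mastery:
        return 0                          # mastery criterion met: stop
    # assign more items the farther the learner is below mastery
    return max(1, round(base * (mastery - p) / mastery) + 1)

print(next_item_count(4, 5))   # at mastery: no further items
```

This is the "program control" half of the loop; an adaptive strategy in the paper's sense would also update the mastery model itself iteratively as evidence accumulates.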
Military application of flat panel displays in the Vetronics Technology Testbed prototype vehicle
NASA Astrophysics Data System (ADS)
Downs, Greg; Roller, Gordon; Brendle, Bruce E., Jr.; Tierney, Terrance
2000-08-01
The ground combat vehicle crew of tomorrow must be able to perform their mission more effectively and efficiently if they are to maintain dominance over ever more lethal enemy forces. Increasing performance, however, becomes even more challenging when the soldier is subject to reduced crew sizes, a never-ending requirement to adapt to ever-evolving technologies, and the demand to assimilate an overwhelming array of battlefield data. This, combined with the requirement to fight with equal effectiveness at any time of the day or night in all types of weather conditions, makes it clear that this crew of tomorrow will need timely, innovative solutions to overcome this multitude of barriers if they are to achieve their objectives. To this end, the U.S. Army is pursuing advanced crew stations with human-computer interfaces that will allow the soldier to take full advantage of emerging technologies and make efficient use of the battlefield information available to him in a program entitled 'Vetronics Technology Testbed.' Two critical components of the testbed are a complement of panoramic indirect vision displays to permit drive-by-wire and multi-function displays for managing lethality, mobility, survivability, situational awareness, and command and control of the vehicle. These displays are being developed and built by Computing Devices Canada, Ltd. This paper addresses the objectives of the testbed program and the technical requirements and design of the displays.
NASA Technical Reports Server (NTRS)
Lewandowski, Leon; Struckman, Keith
1994-01-01
Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.
ROBOTICS IN HAZARDOUS ENVIRONMENTS - REAL DEPLOYMENTS BY THE SAVANNAH RIVER NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.; Tibrea, S.; Nance, T.
The Research & Development Engineering (R&DE) section in the Savannah River National Laboratory (SRNL) engineers, integrates, tests, and supports deployment of custom robotics, systems, and tools for use in radioactive, hazardous, or inaccessible environments. Mechanical and electrical engineers, computer control professionals, specialists, machinists, welders, electricians, and mechanics adapt and integrate commercially available technology with in-house designs, to meet the needs of Savannah River Site (SRS), Department of Energy (DOE), and other governmental agency customers. This paper discusses five R&DE robotic and remote system projects.
Ubiquitous Wireless Smart Sensing and Control
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
Ubiquitous Wireless Smart Sensing and Control. Pumps and Pipes JSC: Uniquely Houston
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
NASA Astrophysics Data System (ADS)
Yi, Juan; Du, Qingyu; Zhang, Hong jiang; Zhang, Yao lei
2017-11-01
Target recognition is a key technology in intelligent image processing and a leading area of current application development; with the growth of computer processing power, autonomous target recognition algorithms have gradually become more intelligent and show good adaptability. Taking airports as the research object, we analyze airport layout characteristics, construct a knowledge model, and independently design a target recognition algorithm based on Gabor filters and the Radon transform. Through image processing and feature extraction on airport imagery, the algorithm was verified and achieved good recognition results.
IEEE 1982. Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-01-01
The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.
Subjective Technology Adaptivity Predicts Technology Use in Old Age.
Kamin, Stefan T; Lang, Frieder R; Beyer, Anja
2017-01-01
To date, not much is known about the psychological and motivational factors underlying technology use in late life. What are the interindividual determinants that lead older adults to invest in using technological innovations despite the age-related physiological changes that impose challenges on behavioral plasticity in everyday life? This research explores interindividual differences in subjective technology adaptivity - a general technology-related motivational resource that accounts for technology use in late life. More specifically, we investigate the influence of this factor relative to demographic characteristics, personality traits, and functional limitations in a longitudinal sample of community-dwelling older adults. We report results from a paper-and-pencil survey with 136 older adults between 59 and 92 years of age (mean = 71.4, SD = 7.4). Of those participants, 77 participated in a 2-year follow-up. We assessed self-reports of technology use, subjective technology adaptivity, functional limitations, and the personality traits openness to new experiences and neuroticism. Higher levels of subjective technology adaptivity were associated with technology use at the first measurement as well as increased use over the course of 2 years. Subjective technology adaptivity is a significant predictor of technology use in old age. Our findings contribute to improving the understanding of interindividual differences when using technological innovation in late life. Moreover, our findings have implications in the context of user involvement and may contribute to the successful development of innovative technology for older adults. © 2017 S. Karger AG, Basel.
Adaptive CFD schemes for aerospace propulsion
NASA Astrophysics Data System (ADS)
Ferrero, A.; Larocca, F.
2017-05-01
The flow fields observed inside several components of aerospace propulsion systems are characterised by very localised phenomena (boundary layers, shock waves, ...) which can deeply influence the performance of the system. In order to evaluate these effects accurately by means of Computational Fluid Dynamics (CFD) simulations, it is necessary to refine the computational mesh locally. In this way the degrees of freedom related to the discretisation are focused on the most interesting regions and the computational cost of the simulation remains acceptable. In the present work, a discontinuous Galerkin (DG) discretisation is used to numerically solve the equations which describe the flow field. The local nature of the DG reconstruction makes it possible to efficiently exploit several adaptive schemes in which the size of the elements (h-adaptivity) and the order of reconstruction (p-adaptivity) are locally changed. After a review of the main adaptation criteria, some examples related to compressible flows in turbomachinery are presented. A hybrid hp-adaptive algorithm is also proposed and compared with a standard h-adaptive scheme in terms of computational efficiency.
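The h-adaptivity idea, concentrating elements where an error indicator is large, can be sketched in one dimension. The indicator below is a crude variation measure on a model profile, standing in for the residual- or adjoint-based estimates a DG solver would actually use:

```python
# Toy 1D h-adaptivity: elements whose error indicator exceeds a tolerance
# are bisected, so degrees of freedom cluster around a localized feature
# (here, a sharp tanh layer standing in for a boundary layer or shock).
import math

def indicator(a, b, f):
    """Crude error indicator: variation of f over the element [a, b]."""
    return abs(f(b) - f(a))

def h_adapt(elements, f, tol=0.2, max_passes=10):
    """Repeatedly bisect elements with large indicators (h-refinement)."""
    for _ in range(max_passes):
        refined, changed = [], False
        for a, b in elements:
            if indicator(a, b, f) > tol:
                mid = 0.5 * (a + b)
                refined += [(a, mid), (mid, b)]   # split the element in two
                changed = True
            else:
                refined.append((a, b))
        elements = refined
        if not changed:
            break                                  # mesh has converged
    return elements

# Model "flow field" with a sharp layer at x = 0.5
mesh = h_adapt([(0.0, 0.5), (0.5, 1.0)], lambda x: math.tanh(40 * (x - 0.5)))
```

A p- or hp-adaptive variant would, instead of (or in addition to) bisecting, raise the local polynomial order where the solution is smooth, which is where the hybrid algorithm in the paper gains its efficiency.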
Socio-Pedagogical Complex as a Pedagogical Support Technology of Students' Social Adaptation
ERIC Educational Resources Information Center
Sadovaya, Victoriya V.; Simonova, Galina I.
2016-01-01
The relevance of the problem stated in the article is determined by the need of developing technological approaches to pedagogical support of students' social adaptation. The purpose of this paper is to position the technological sequence of pedagogical support of students' social adaptation in the activities of the socio-pedagogical complex. The…
ERIC Educational Resources Information Center
Walkington, Candace A.
2013-01-01
Adaptive learning technologies are emerging in educational settings as a means to customize instruction to learners' background, experiences, and prior knowledge. Here, a technology-based personalization intervention within an intelligent tutoring system (ITS) for secondary mathematics was used to adapt instruction to students' personal interests.…
RASCAL: A Rudimentary Adaptive System for Computer-Aided Learning.
ERIC Educational Resources Information Center
Stewart, John Christopher
Both the background of computer-assisted instruction (CAI) systems in general and the requirements of a computer-aided learning system which would be a reasonable assistant to a teacher are discussed. RASCAL (Rudimentary Adaptive System for Computer-Aided Learning) is a first attempt at defining a CAI system which would individualize the learning…
DKIST Adaptive Optics System: Simulation Results
NASA Astrophysics Data System (ADS)
Marino, Jose; Schmidt, Dirk
2016-05-01
The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra-high-order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly includes important secondary effects such as field-dependent distortions and varying contrast of the WFS sub-aperture images.
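The extended-field Shack-Hartmann sensing mentioned above ultimately reduces, per sub-aperture, to estimating the shift of a small image relative to a reference scene: solar WFSs track granulation by correlation rather than spot centroiding. The toy sketch below (integer-pixel accuracy, FFT-based circular correlation; not the paper's simulator) shows the idea:

```python
import numpy as np

def subaperture_shift(ref, img):
    """Estimate the (dy, dx) shift of a sub-aperture image relative to a
    reference, via the peak of their cross-correlation.

    Solar Shack-Hartmann sensors track extended granulation scenes, so the
    correlation peak (rather than a spot centroid) gives the local
    wavefront slope. Toy version: integer-pixel, circular correlation.
    """
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    shifts = [int(p) if p <= s // 2 else int(p) - s
              for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

ref = np.zeros((16, 16))
ref[8, 8] = 1.0
img = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)  # shift by (+2, -3)
print(subaperture_shift(ref, img))
```

Real solar AO pipelines refine this with sub-pixel interpolation and must run at kilohertz rates for thousands of sub-apertures, which is why the simulator's use of multi-core hardware matters.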
Piao, Jin-Chun; Kim, Shin-Dug
2017-01-01
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when the different levels of the adaptive policy are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement using the proposed method. PMID:29112143
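The adaptive execution module described above can be pictured as a per-frame policy that falls back to full visual–inertial odometry whenever the cheap optical-flow path looks unreliable. The sketch below is illustrative only; the function name, inputs, and thresholds are hypothetical and not taken from the paper.

```python
def select_odometry(tracked_features, inter_frame_motion,
                    min_features=80, max_motion=0.15):
    """Pick a pose-estimation backend per frame, in the spirit of an
    adaptive visual-inertial SLAM front end.

    Full visual-inertial odometry (VIO) is accurate but expensive; the
    optical-flow fast path is cheap but assumes small inter-frame motion
    and a healthy number of tracked features.
    """
    if tracked_features >= min_features and inter_frame_motion <= max_motion:
        return "fast-optical-flow"   # cheap path is safe here
    return "full-vio"                # fall back to the robust estimator

# Well-textured scene, slow motion: take the fast path.
print(select_odometry(tracked_features=120, inter_frame_motion=0.05))
# Few features and fast motion: use full VIO.
print(select_odometry(tracked_features=30, inter_frame_motion=0.4))
```

The reported tracking-time reductions come from how aggressively such a policy routes frames to the fast path: a looser threshold saves more time at some risk of drift.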
Biomimetic molecular design tools that learn, evolve, and adapt.
Winkler, David A
2017-01-01
A dominant hallmark of living systems is their ability to adapt to changes in the environment by learning and evolving. Nature does this so superbly that intensive research efforts are now attempting to mimic biological processes. Initially this biomimicry involved developing synthetic methods to generate complex bioactive natural products. Recent work is attempting to understand how molecular machines operate so their principles can be copied, and to learn how to employ biomimetic evolution and learning methods to solve complex problems in science, medicine and engineering. Automation, robotics, artificial intelligence, and evolutionary algorithms are now converging to generate what might broadly be called in silico-based adaptive evolution of materials. These methods are being applied to organic chemistry to systematize reactions, create synthesis robots to carry out unit operations, and to devise closed-loop flow self-optimizing chemical synthesis systems. Most scientific innovations and technologies pass through the well-known "S curve", with a slow beginning, an almost exponential growth in capability, and a stable applications period. Adaptive, evolving, machine learning-based molecular design and optimization methods are approaching the period of very rapid growth and their impact is already being described as potentially disruptive. This paper describes new developments in biomimetic adaptive, evolving, learning computational molecular design methods and their potential impacts in chemistry, engineering, and medicine.
Structured illumination 3D microscopy using adaptive lenses and multimode fibers
NASA Astrophysics Data System (ADS)
Czarske, Jürgen; Philipp, Katrin; Koukourakis, Nektarios
2017-06-01
Microscopic techniques with high spatial and temporal resolution are required for studying biological cells and tissues in vivo. Adaptive lenses exhibit strong potential for fast, motion-free axial scanning. However, they also degrade the achievable resolution because of aberrations. This hurdle can be overcome by digital optical technologies. We present a novel High-and-Low-frequency (HiLo) 3D microscope using structured illumination and an adaptive lens. Uniform illumination is used to obtain optical sectioning for the high-frequency (Hi) components of the image, and nonuniform illumination is needed to obtain optical sectioning for the low-frequency (Lo) components. The nonuniform illumination is provided by a multimode fiber, which ensures robustness against optical aberrations of the adaptive lens. The depth of field of our microscope can be adjusted a posteriori by computational optics, enabling flexible scans that compensate for irregular axial measurement positions. The adaptive HiLo 3D microscope provides an axial scanning range of 1 mm with an axial resolution of about 4 microns and sub-micron lateral resolution over the full scanning range. As a result, volumetric measurements with high temporal and spatial resolution are provided. Demonstration measurements of zebrafish embryos with reporter-gene-driven fluorescence in the thyroid gland are presented.
Adaptive Engine Technologies for Aviation CO2 Emissions Reduction
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Haller, William J.; Tong, Michael T.
2006-01-01
Adaptive turbine engine technologies are assessed for their potential to reduce carbon dioxide emissions from commercial air transports. Technologies discussed include inlet, fan, and compressor flow control; compressor stall control; blade clearance control; combustion control; active bearings; and enabling technologies such as active materials and wireless sensors. The method of systems assessment is described, including strengths and weaknesses of the approach. Performance benefit estimates are presented for each technology, with a summary of potential emissions reduction possible from the development of new, adaptively controlled engine components.
Prykhozhij, Sergey V; Rajan, Vinothkumar; Berman, Jason N
2016-02-01
The development of clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 technology for mainstream biotechnological use based on its discovery as an adaptive immune mechanism in bacteria has dramatically improved the ability of molecular biologists to modify genomes of model organisms. The zebrafish is highly amenable to applications of CRISPR/Cas9 for mutation generation and a variety of DNA insertions. Cas9 protein in complex with a guide RNA molecule recognizes where to cut the homologous DNA based on a short stretch of DNA termed the protospacer-adjacent motif (PAM). Rapid and efficient identification of target sites immediately preceding PAM sites, quantification of genomic occurrences of similar (off target) sites and predictions of cutting efficiency are some of the features where computational tools play critical roles in CRISPR/Cas9 applications. Given the rapid advent and development of this technology, it can be a challenge for researchers to remain up to date with all of the important technological developments in this field. We have contributed to the armamentarium of CRISPR/Cas9 bioinformatics tools and trained other researchers in the use of appropriate computational programs to develop suitable experimental strategies. Here we provide an in-depth guide on how to use CRISPR/Cas9 and other relevant computational tools at each step of a host of genome editing experimental strategies. We also provide detailed conceptual outlines of the steps involved in the design and execution of CRISPR/Cas9-based experimental strategies, such as generation of frameshift mutations, larger chromosomal deletions and inversions, homology-independent insertion of gene cassettes and homology-based knock-in of defined point mutations and larger gene constructs.
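As a minimal illustration of the target-site identification step mentioned above, the following scans the forward strand of a sequence for 20-nt protospacers immediately followed by the SpCas9 NGG protospacer-adjacent motif. It is a simplified sketch of the general rule, not one of the authors' tools; real design tools also scan the reverse strand, score off-target occurrences, and predict cutting efficiency.

```python
import re

def find_spcas9_sites(seq, protospacer_len=20):
    """Return (start, protospacer, PAM) tuples for forward-strand SpCas9 sites.

    SpCas9 cuts ~3 bp upstream of an NGG PAM; here 'N' is any base. The
    lookahead makes overlapping PAM occurrences all get reported.
    """
    seq = seq.upper()
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        if pam_start >= protospacer_len:      # room for a full protospacer
            proto = seq[pam_start - protospacer_len:pam_start]
            sites.append((pam_start - protospacer_len, proto, m.group(1)))
    return sites

dna = "ACGT" * 6 + "TGG" + "ACGT"
for start, proto, pam in find_spcas9_sites(dna):
    print(start, proto, pam)
```

Counting genomic occurrences of near-identical protospacers (off-targets) is then a matter of running the same kind of scan genome-wide with mismatches allowed, which is where the specialized bioinformatics tools the authors describe come in.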
Integration of scheduling and discrete event simulation systems to improve production flow planning
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.
2016-08-01
The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed, which eliminates problems associated with model complexity and with the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.
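The data mapping and transformation step described above can be pictured as a field-mapping pass from a scheduler's export records to a simulator's input records. The field names below are invented for illustration; they are not the actual KbRS or Enterprise Dynamics schemas.

```python
# Mapping from (hypothetical) scheduler fields to (hypothetical)
# simulator fields.
FIELD_MAP = {
    "op_id": "TaskName",
    "machine": "ServerName",
    "proc_time_min": "CycleTime",
}

def transform(schedule_rows):
    """Rename fields and convert minutes to seconds for the simulator."""
    out = []
    for row in schedule_rows:
        rec = {FIELD_MAP[k]: v for k, v in row.items() if k in FIELD_MAP}
        rec["CycleTime"] = rec["CycleTime"] * 60  # minutes -> seconds
        out.append(rec)
    return out

rows = [{"op_id": "OP10", "machine": "M1", "proc_time_min": 2.5}]
print(transform(rows))
```

Semi-automatic model generation amounts to running such a pass over the whole schedule export so the simulation model is rebuilt from data rather than edited by hand.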
Walking robot: A design project for undergraduate students
NASA Technical Reports Server (NTRS)
1991-01-01
The objective of the University of Maryland walking robot project was to design, analyze, assemble, and test an intelligent, mobile, and terrain-adaptive system. The robot incorporates existing technologies in novel ways. The legs emulate the walking path of a human by an innovative modification of a crank-and-rocker mechanism. The body consists of two tripod frames connected by a turning mechanism. The two sets of three legs are mounted so as to allow the robot to walk with stability in its own footsteps. The computer uses a modular hardware design and distributed processing. Dual-port RAM is used to allow communication between a supervisory personal computer and seven microcontrollers. The microcontrollers provide low-level control for the motors and relieve the processing burden on the PC.
Computer program for the automated attendance accounting system
NASA Technical Reports Server (NTRS)
Poulson, P.; Rasmusson, C.
1971-01-01
The automated attendance accounting system (AAAS) was developed under the auspices of the Space Technology Applications Program. The task is basically the adaptation of a small digital computer, coupled with specially developed pushbutton terminals located in school classrooms and offices for the purpose of taking daily attendance, maintaining complete attendance records, and producing partial and summary reports. Especially developed for high schools, the system is intended to relieve both teachers and office personnel from the time-consuming and dreary task of recording and analyzing the myriad classroom attendance data collected throughout the semester. In addition, since many school district budgets are related to student attendance, the increase in accounting accuracy is expected to augment district income. A major component of this system is the real-time AAAS software system, which is described.
2013-10-01
Sponsored Report Series: Defense Acquisition and the Case of the Joint Capabilities Technology Demonstration Office: Ad Hoc Problem Solving as a Mechanism for Adaptive Change. The report presents findings of a study exploring what drives successful organizational adaptation in the context of technology transition and acquisition within the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laughlin, Gary L.
The International, Homeland, and Nuclear Security (IHNS) Program Management Unit (PMU) oversees a broad portfolio of Sandia’s programs in areas ranging from global nuclear security to critical asset protection. We use science and technology, innovative research, and global engagement to counter threats, reduce dangers, and respond to disasters. The PMU draws on the skills of scientists and engineers from across Sandia. Our programs focus on protecting US government installations, safeguarding nuclear weapons and materials, facilitating nonproliferation activities, securing infrastructures, countering chemical and biological dangers, and reducing the risk of terrorist threats. We conduct research in risk and threat analysis, monitoring and detection, decontamination and recovery, and situational awareness. We develop technologies for verifying arms control agreements, neutralizing dangerous materials, detecting intruders, and strengthening resiliency. Our programs use Sandia’s High-Performance Computing resources for predictive modeling and simulation of interdependent systems, for modeling dynamic threats and forecasting adaptive behavior, and for enabling decision support and processing large cyber data streams. In this report, we highlight four advanced computation projects that illustrate the breadth of the IHNS mission space.
Closed-loop dialog model of face-to-face communication with a photo-real virtual human
NASA Astrophysics Data System (ADS)
Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás
2004-01-01
We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Perdomo-Ortiz, Alejandro
2018-07-01
Machine learning has been presented as one of the key applications for near-term quantum technologies, given its high commercial value and wide range of applicability. In this work, we introduce the quantum-assisted Helmholtz machine: a hybrid quantum–classical framework with the potential of tackling high-dimensional real-world machine learning datasets on continuous variables. Instead of using quantum computers only to assist deep learning, as previous approaches have suggested, we use deep learning to extract a low-dimensional binary representation of data, suitable for processing on relatively small quantum computers. Then, the quantum hardware and deep learning architecture work together to train an unsupervised generative model. We demonstrate this concept using 1644 quantum bits of a D-Wave 2000Q quantum device to model a sub-sampled version of the MNIST handwritten digit dataset with 16 × 16 continuous valued pixels. Although we illustrate this concept on a quantum annealer, adaptations to other quantum platforms, such as ion-trap technologies or superconducting gate-model architectures, could be explored within this flexible framework.
Intelligence Applied to Air Vehicles
NASA Technical Reports Server (NTRS)
Rosen, Robert; Gross, Anthony R.; Fletcher, L. Skip; Zornetzer, Steven (Technical Monitor)
2000-01-01
The exponential growth in information technology has provided the potential for air vehicle capabilities that were previously unavailable to mission and vehicle designers. The increasing capabilities of computer hardware and software, including new developments such as neural networks, provide a new balance of work between humans and machines. This paper will describe several NASA projects, and review results and conclusions from ground and flight investigations where vehicle intelligence was developed and applied to aeronautical and space systems. In the first example, flight results from a neural network flight control demonstration will be reviewed. Using a highly modified F-15 aircraft, a NASA/Dryden experimental flight test program has demonstrated how the neural network software can correctly identify and respond to changes in aircraft stability and control characteristics. Using its on-line learning capability, the neural net software would identify that something in the vehicle has changed, then reconfigure the flight control computer system to adapt to those changes. The results of the Remote Agent software project will be presented. This capability will reduce the cost of future spacecraft operations as computers become "thinking" partners along with humans. In addition, the paper will describe the objectives and plans for the autonomous airplane program and the autonomous rotorcraft project. Enabling technologies will also be described.
Potential of Cognitive Computing and Cognitive Systems
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2015-01-01
Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments, incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces, and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized/collaborative learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp
Importance of balanced architectures in the design of high-performance imaging systems
NASA Astrophysics Data System (ADS)
Sgro, Joseph A.; Stanton, Paul C.
1999-03-01
Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is that the performance of the system fails to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
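The compute-versus-bandwidth balance the authors describe can be checked with simple arithmetic: a kernel whose arithmetic intensity (FLOPs per byte moved) falls below the machine balance (peak FLOP rate divided by peak memory bandwidth) is starved by the bus, and adding processors on that bus will not make it scale. The hardware and kernel numbers below are hypothetical placeholders.

```python
def bound_by(kernel_flops_per_byte, peak_gflops, peak_gb_per_s):
    """Classify a kernel as compute- or memory-bound.

    machine_balance is the FLOPs the machine can do per byte it can move;
    kernels below that ratio cannot be fed fast enough from memory.
    """
    machine_balance = peak_gflops / peak_gb_per_s   # FLOPs per byte
    if kernel_flops_per_byte < machine_balance:
        return "memory-bound"
    return "compute-bound"

# Hypothetical example: an image filter doing ~9 FLOPs per byte moved, on
# a machine with 100 GFLOP/s peak and a 5 GB/s shared bus (balance = 20).
print(bound_by(9.0, peak_gflops=100.0, peak_gb_per_s=5.0))
```

This is exactly the diagnosis for the shared-bus cluster symptom in the abstract: the per-processor bandwidth share shrinks as processors are added, so a memory-bound kernel stops scaling long before compute is exhausted.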
Technical Concept Document. Central Archive for Reusable Defense Software (CARDS)
1994-02-28
February 1994. Informal Technical Report for the Software Technology for Adaptable, Reliable Systems (STARS) program: Technical Concept Document, Central Archive for Reusable Defense Software (CARDS). Distributed in accordance with the DFARS Special Works Clause. This document was developed under the Software Technology for Adaptable, Reliable Systems program.
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can easily be accomplished in the open-source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault-tolerance and crash-recovery system against data loss, being able to re-route jobs upon node failure and to recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than with a single computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-and-play" installation with a pre-configured example.
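The near-linear speedup reported above amounts to splitting the query set across worker nodes so each runs BLAST on its own share. A minimal sketch of such a partition follows; the function and names are illustrative, not Squid's actual API, and real systems add the fault-tolerant re-routing the abstract describes.

```python
def partition_queries(queries, n_nodes):
    """Round-robin split of query sequences across worker nodes.

    With roughly uniform per-query cost, each of the n_nodes workers
    receives ~len(queries)/n_nodes jobs, so total wall time drops by
    almost a factor of n_nodes (network and merge overhead excluded).
    """
    buckets = [[] for _ in range(n_nodes)]
    for i, q in enumerate(queries):
        buckets[i % n_nodes].append(q)
    return buckets

jobs = [f"query_{i}" for i in range(10)]
for node, bucket in enumerate(partition_queries(jobs, 3)):
    print(node, bucket)
```

Re-routing on node failure then just means moving a failed bucket's unfinished queries into the surviving buckets, which is why recovery costs little beyond the re-run jobs themselves.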
NASA Astrophysics Data System (ADS)
Alsaadi, Fuad E.
2016-03-01
Optical wireless systems are promising candidates for next-generation indoor communication networks. Optical wireless technology offers freedom from spectrum regulations and, compared to current radio-frequency networks, higher data rates and increased security. This paper presents a fast adaptation method for multibeam angle and delay adaptation systems and a new spot-diffusing geometry, and also considers restrictions needed for complying with eye safety regulations. The fast adaptation algorithm reduces the computational load required to reconfigure the transmitter in the case of transmitter and/or receiver mobility. The beam clustering approach enables the transmitter to assign power to spots within the pixel's field of view (FOV) and increases the number of such spots. Thus, if the power per spot is restricted to comply with eye safety standards, the new approach, in which more spots are visible within the FOV of the pixel, leads to enhanced signal-to-noise ratio (SNR). Simulation results demonstrate that the techniques proposed in this paper lead to SNR improvements that enable reliable operation at data rates as high as 15 Gbit/s. These results are based on simulation and not on actual measurements or experiments.
North American Fuzzy Logic Processing Society (NAFIPS 1992), volume 1
NASA Technical Reports Server (NTRS)
Villarreal, James A. (Compiler)
1992-01-01
This document contains papers presented at the NAFIPS '92 North American Fuzzy Information Processing Society Conference. More than 75 papers were presented at this Conference, which was sponsored by NAFIPS in cooperation with NASA, the Instituto Tecnologico de Morelia, the Indian Society for Fuzzy Mathematics and Information Processing (ISFUMIP), the Instituto Tecnologico de Estudios Superiores de Monterrey (ITESM), the International Fuzzy Systems Association (IFSA), the Japan Society for Fuzzy Theory and Systems, and the Microelectronics and Computer Technology Corporation (MCC). The fuzzy set theory has led to a large number of diverse applications. Recently, interesting applications have been developed which involve the integration of fuzzy systems with adaptive processes such as neural networks and genetic algorithms. NAFIPS '92 was directed toward the advancement, commercialization, and engineering development of these technologies.
Laser Guide Star Based Astrophysics at Lick Observatory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, C; Gavel, D.; Friedman, H.
2000-03-10
The resolution of ground-based telescopes is typically limited to ~1 second of arc because of the blurring effects of atmospheric turbulence. Adaptive optics (AO) technology senses and corrects for the optical distortions due to turbulence hundreds of times per second using high-speed sensors, computers, deformable mirrors, and laser technology. The goal of this project is to make AO systems widely useful astronomical tools providing resolutions up to an order of magnitude better than current ground-based telescopes. Astronomers at the University of California Lick Observatory at Mt. Hamilton now routinely use the LLNL-developed AO system for high-resolution imaging of astrophysical objects. We report here on the instrument development progress and on the science observations made with this system during this 3-year ERI project.
North American Fuzzy Logic Processing Society (NAFIPS 1992), volume 2
NASA Technical Reports Server (NTRS)
Villarreal, James A. (Compiler)
1992-01-01
This document contains papers presented at the NAFIPS '92 North American Fuzzy Information Processing Society Conference. More than 75 papers were presented at this Conference, which was sponsored by NAFIPS in cooperation with NASA, the Instituto Tecnologico de Morelia, the Indian Society for Fuzzy Mathematics and Information Processing (ISFUMIP), the Instituto Tecnologico de Estudios Superiores de Monterrey (ITESM), the International Fuzzy Systems Association (IFSA), the Japan Society for Fuzzy Theory and Systems, and the Microelectronics and Computer Technology Corporation (MCC). The fuzzy set theory has led to a large number of diverse applications. Recently, interesting applications have been developed which involve the integration of fuzzy systems with adaptive processes such as neural networks and genetic algorithms. NAFIPS '92 was directed toward the advancement, commercialization, and engineering development of these technologies.
Practical skills of the future innovator
NASA Astrophysics Data System (ADS)
Kaurov, Vitaliy
2015-03-01
Physics graduates face, and are often disoriented by, the complex and turbulent world of startups, incubators, emergent technologies, big data, social network engineering, and so on. To build curricula that foster the skills necessary to navigate this world, we will look at experiences from the Wolfram Science Summer School, which has gathered international students annually for more than a decade. We will look at example projects and see the development of such skills as innovative thinking, data mining, machine learning, cloud technologies, device connectivity and the Internet of Things, network analytics, geo-information systems, formalized computable knowledge, and the adjacent applied research skills from graph theory to image processing and beyond. This should give solid ideas to educators building standard curricula adapted for innovation and entrepreneurship education.
Active assistance technology for health-related behavior change: an interdisciplinary review.
Kennedy, Catriona M; Powell, John; Payne, Thomas H; Ainsworth, John; Boyd, Alan; Buchan, Iain
2012-06-14
Information technology can help individuals to change their health behaviors. This is due to its potential for dynamic and unbiased information processing enabling users to monitor their own progress and be informed about risks and opportunities specific to evolving contexts and motivations. However, in many behavior change interventions, information technology is underused by treating it as a passive medium focused on efficient transmission of information and a positive user experience. To conduct an interdisciplinary literature review to determine the extent to which the active technological capabilities of dynamic and adaptive information processing are being applied in behavior change interventions and to identify their role in these interventions. We defined key categories of active technology such as semantic information processing, pattern recognition, and adaptation. We conducted the literature search using keywords derived from the categories and included studies that indicated a significant role for an active technology in health-related behavior change. In the data extraction, we looked specifically for the following technology roles: (1) dynamic adaptive tailoring of messages depending on context, (2) interactive education, (3) support for client self-monitoring of behavior change progress, and (4) novel ways in which interventions are grounded in behavior change theories using active technology. The search returned 228 potentially relevant articles, of which 41 satisfied the inclusion criteria. We found that significant research was focused on dialog systems, embodied conversational agents, and activity recognition. The most covered health topic was physical activity. The majority of the studies were early-stage research. Only 6 were randomized controlled trials, of which 4 were positive for behavior change and 5 were positive for acceptability. 
Empathy and relational behavior were significant research themes in dialog systems for behavior change, with many pilot studies showing a preference for those features. We found few studies that focused on interactive education (3 studies) and self-monitoring (2 studies). Some recent research is emerging in dynamic tailoring (15 studies) and theoretically grounded ontologies for automated semantic processing (4 studies). The potential capabilities and risks of active assistance technologies are not being fully explored in most current behavior change research. Designers of health behavior interventions need to consider the relevant informatics methods and algorithms more fully. There is also a need to analyze the possibilities that can result from interaction between different technology components. This requires deep interdisciplinary collaboration, for example, between health psychology, computer science, health informatics, cognitive science, and educational methodology.
Simple and Effective Algorithms: Computer-Adaptive Testing.
ERIC Educational Resources Information Center
Linacre, John Michael
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
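The "simple approaches" this abstract alludes to can be illustrated with a minimal Rasch-model item-selection loop: pick the unused item whose difficulty is closest to the current ability estimate, then nudge the estimate by the response residual. This is a generic sketch of computer-adaptive testing, not code from the cited work; the function names and the fixed step size are illustrative assumptions.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, used):
    """Pick the unused item whose difficulty is closest to the ability estimate."""
    candidates = [i for i in range(len(difficulties)) if i not in used]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

def update_ability(theta, b, correct, step=0.5):
    """Nudge the ability estimate by the response residual (observed - expected)."""
    residual = (1.0 if correct else 0.0) - rasch_p(theta, b)
    return theta + step * residual
```

A classroom-scale item bank and a handful of responses are enough to drive this loop; the adaptivity comes entirely from `next_item` re-targeting each selection at the updated estimate.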
Recent advances in radiation oncology
Garibaldi, Cristina; Jereczek-Fossa, Barbara Alicja; Marvaso, Giulia; Dicuonzo, Samantha; Rojas, Damaris Patricia; Cattani, Federica; Starzyńska, Anna; Ciardo, Delia; Surgo, Alessia; Leonardi, Maria Cristina; Ricotti, Rosalinda
2017-01-01
Radiotherapy (RT) is very much a technology-driven treatment modality in the management of cancer. RT techniques have changed significantly over the past few decades, thanks to improvements in engineering and computing. We aim to highlight the recent developments in radiation oncology, focusing on the technological and biological advances. We will present state-of-the-art treatment techniques, employing photon beams, such as intensity-modulated RT, volumetric-modulated arc therapy, stereotactic body RT and adaptive RT, which make possible a highly tailored dose distribution with maximum normal tissue sparing. We will analyse all the steps involved in the treatment: imaging, delineation of the tumour and organs at risk, treatment planning and finally image-guidance for accurate tumour localisation before and during treatment delivery. Particular attention will be given to the crucial role that imaging plays throughout the entire process. In the case of adaptive RT, the precise identification of target volumes as well as the monitoring of tumour response/modification during the course of treatment is mainly based on multimodality imaging that integrates morphological, functional and metabolic information. Moreover, real-time imaging of the tumour is essential in breathing adaptive techniques to compensate for tumour motion due to respiration. Brief reference will be made to the recent spread of particle beam therapy, in particular to the use of protons, but also to the yet limited experience of using heavy particles such as carbon ions. Finally, we will analyse the latest biological advances in tumour targeting. Indeed, the effectiveness of RT has been improved not only by technological developments but also through the integration of radiobiological knowledge to produce more efficient and personalised treatment strategies. PMID:29225692
SEQ-POINTER: Next generation, planetary spacecraft remote sensing science observation design tool
NASA Technical Reports Server (NTRS)
Boyer, Jeffrey S.
1994-01-01
Since Mariner, NASA-JPL planetary missions have been supported by ground software to plan and design remote sensing science observations. The software used by the science and sequence designers to plan and design observations has evolved with mission and technological advances. The original program, PEGASIS (Mariners 4, 6, and 7), was re-engineered as POGASIS (Mariner 9, Viking, and Mariner 10), and again later as POINTER (Voyager and Galileo). Each of these programs was developed under technological, political, and fiscal constraints which limited their adaptability to other missions and spacecraft designs. Implementation of a multi-mission tool, SEQ POINTER, under the auspices of the JPL Multimission Operations Systems Office (MOSO) is in progress. This version has been designed to address the limitations experienced on previous versions as they were being adapted to a new mission and spacecraft. The tool has been modularly designed with subroutine interface structures to support interchangeable celestial body and spacecraft definition models. The computational and graphics modules have also been designed to interface with data collected from previous spacecraft, or on-going observations, which describe the surface of each target body. These enhancements make SEQ POINTER a candidate for low-cost mission usage, when a remote sensing science observation design capability is required. The current and planned capabilities of the tool will be discussed. The presentation will also include a 5-10 minute video presentation demonstrating the capabilities of a proto-Cassini Project version that was adapted to test the tool. The work described in this abstract was performed by the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
Computer Based Porosity Design by Multi Phase Topology Optimization
NASA Astrophysics Data System (ADS)
Burblies, Andreas; Busse, Matthias
2008-02-01
A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage that there is a continuous change of mass by growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. When MPTO is applied to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. Now it is possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity design structures can be manufactured by new Rapid Prototyping technologies. Fraunhofer IFAM has successfully applied 3D-Printing and Selective Laser Sintering methods in order to produce very stiff lightweight components with graded porosities calculated by MPTO.
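As a rough illustration of the MPTO idea of minimizing elastic energy while keeping material concentrations fixed, the toy sketch below redistributes two materials over loaded spring elements by pairwise swaps, accepting only swaps that lower the total energy. This is a drastic simplification, not Fraunhofer IFAM's method; the series-spring energy model and all names are assumptions for illustration.

```python
import itertools

def elastic_energy(forces, stiffness):
    """Total elastic energy of independently loaded springs: sum of F^2 / (2k)."""
    return sum(f * f / (2.0 * k) for f, k in zip(forces, stiffness))

def mpto_sweep(forces, materials, k_of):
    """Greedy pairwise material swaps until no swap lowers the energy.
    Swapping (rather than growing) keeps every material fraction fixed."""
    materials = materials[:]
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(materials)), 2):
            if materials[i] == materials[j]:
                continue
            trial = materials[:]
            trial[i], trial[j] = trial[j], trial[i]
            if (elastic_energy(forces, [k_of[m] for m in trial])
                    < elastic_energy(forces, [k_of[m] for m in materials])):
                materials = trial
                improved = True
    return materials
```

On a two-element example the stiff material migrates to the highly loaded element, the discrete analogue of the graded, load-following porosity distributions the abstract describes.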
Advances in Orion's On-Orbit Guidance and Targeting System Architecture
NASA Technical Reports Server (NTRS)
Scarritt, Sara K.; Fill, Thomas; Robinson, Shane
2015-01-01
NASA's manned spaceflight programs have a rich history of advancing onboard guidance and targeting technology. In order to support future missions, the guidance and targeting architecture for the Orion Multi-Purpose Crew Vehicle must be able to operate in complete autonomy, without any support from the ground. Orion's guidance and targeting system must be sufficiently flexible to easily adapt to a wide array of undecided future missions, yet also not cause an undue computational burden on the flight computer. This presents a unique design challenge from the perspective of both algorithm development and system architecture construction. The present work shows how Orion's guidance and targeting system addresses these challenges. On the algorithm side, the system advances the state-of-the-art by: (1) steering burns with a simple closed-loop guidance strategy based on Shuttle heritage, and (2) planning maneuvers with a cutting-edge two-level targeting routine. These algorithms are then placed into an architecture designed to leverage the advantages of each and ensure that they function in concert with one another. The resulting system is characterized by modularity and simplicity. As such, it is adaptable to the on-orbit phases of any future mission that Orion may attempt.
20-GFLOPS QR processor on a Xilinx Virtex-E FPGA
NASA Astrophysics Data System (ADS)
Walke, Richard L.; Smith, Robert W. M.; Lightbody, Gaye
2000-11-01
Adaptive beamforming can play an important role in sensor array systems in countering directional interference. In high-sample-rate systems, such as radar and communications, the calculation of adaptive weights is a very computationally demanding task that requires highly parallel solutions. For systems where low power consumption and volume are important, the only viable implementation is as an Application Specific Integrated Circuit (ASIC). However, the rapid advancement of Field Programmable Gate Array (FPGA) technology is enabling highly credible re-programmable solutions. In this paper we present the implementation of a scalable linear array processor for weight calculation using QR decomposition. We employ floating-point arithmetic with mantissa size optimized to the target application to minimize component size, and implement the operators as relationally placed macros (RPMs) on Xilinx Virtex FPGAs to achieve predictable dense layout and high-speed operation. We present results showing that 20 GFLOPS of sustained computation on a single XCV3200E-8 Virtex-E FPGA is possible. We also describe the parameterized implementation of the floating-point operators and QR-processor, and the design methodology that enables us to rapidly generate complex FPGA implementations using the industry standard hardware description language VHDL.
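The numerical core that such QR array processors implement, triangularization by Givens rotations followed by back-substitution, can be sketched in software as a least-squares weight solve. This is a generic illustration of the algorithm, not the paper's VHDL design; real-valued data and the function names are assumptions.

```python
import math

def qr_least_squares(A, b):
    """Solve the least-squares problem A x ~= b via Givens-rotation QR,
    the same rotate-and-annihilate scheme systolic QR arrays pipeline."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    d = b[:]
    for j in range(n):                      # zero out column j below the diagonal
        for i in range(m - 1, j, -1):
            a_, b_ = R[i - 1][j], R[i][j]
            r = math.hypot(a_, b_)
            if r == 0.0:
                continue
            c, s = a_ / r, b_ / r           # Givens rotation annihilating R[i][j]
            for k in range(n):
                x, y = R[i - 1][k], R[i][k]
                R[i - 1][k] = c * x + s * y
                R[i][k] = -s * x + c * y
            x, y = d[i - 1], d[i]           # apply the same rotation to b
            d[i - 1] = c * x + s * y
            d[i] = -s * x + c * y
    w = [0.0] * n                           # back-substitution on triangular R
    for i in range(n - 1, -1, -1):
        w[i] = (d[i] - sum(R[i][j] * w[j] for j in range(i + 1, n))) / R[i][i]
    return w
```

Because each rotation touches only two rows, the loop nest maps naturally onto a linear array of processing cells, which is what makes the method attractive for the FPGA implementation the paper describes.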
Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele
QUEST (\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems.
A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (\url{muq.mit.edu}).
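Forward uncertainty propagation, one of the capabilities listed above, reduces in its plainest form to pushing input samples through a model and summarizing the output distribution. The sketch below is a generic Monte Carlo illustration, not MUQ code; the function names and the mean/stdev summary are assumptions.

```python
import random
import statistics

def propagate(model, sample_input, n=2000, seed=0):
    """Plain Monte Carlo forward propagation: draw n input samples,
    push each through the model, and summarize the outputs."""
    rng = random.Random(seed)
    outputs = [model(sample_input(rng)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)
```

For example, propagating a standard normal input through the model x -> x^2 yields an output mean near 1 (the chi-square mean); the adaptive sampling and surrogate methods mentioned in the abstract exist precisely to cut the sample count this brute-force estimator needs.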
LeaRN: A Collaborative Learning-Research Network for a WLCG Tier-3 Centre
NASA Astrophysics Data System (ADS)
Pérez Calle, Elio
2011-12-01
The Department of Modern Physics of the University of Science and Technology of China is hosting a Tier-3 centre for the ATLAS experiment. An interdisciplinary team of researchers, engineers and students is devoted to the task of receiving, storing and analysing the scientific data produced by the LHC. In order to achieve the highest performance and to develop a knowledge base shared by all members of the team, the research activities and their coordination are being supported by an array of computing systems. These systems have been designed to foster communication, collaboration and coordination among the members of the team, both face-to-face and remotely, and in both synchronous and asynchronous ways. The result is a collaborative learning-research network whose main objectives are awareness (to get shared knowledge about others' activities and therefore obtain synergies), articulation (to allow a project to be divided, work units to be assigned and then reintegrated) and adaptation (to adapt information technologies to the needs of the group). The main technologies involved are Communication Tools such as web publishing, revision control and wikis, Conferencing Tools such as forums, instant messaging and video conferencing and Coordination Tools, such as time management, project management and social networks. The software toolkit has been deployed by the members of the team and it has been based on free and open source software.
Comparing Computer-Adaptive and Curriculum-Based Measurement Methods of Assessment
ERIC Educational Resources Information Center
Shapiro, Edward S.; Gebhardt, Sarah N.
2012-01-01
This article reported the concurrent, predictive, and diagnostic accuracy of a computer-adaptive test (CAT) and curriculum-based measurements (CBM; both computation and concepts/application measures) for universal screening in mathematics among students in first through fourth grade. Correlational analyses indicated moderate to strong…
BUDS Candidate Success Through RTC: First Watch Results
2007-01-01
…motivation. NCAPS (Navy Computer Adaptive Personality Scales) covers traits including Achievement, Stress Tolerance, Self Reliance, and Leadership. …military population. The Navy Computer Adaptive Personality Scales (NCAPS), however, was developed specifically to predict success across all Navy…
Pilot Evaluation of Adaptive Control in Motion-Based Flight Simulator
NASA Technical Reports Server (NTRS)
Kaneshige, John T.; Campbell, Stefan Forrest
2009-01-01
The objective of this work is to assess the strengths, weaknesses, and robustness characteristics of several MRAC (Model-Reference Adaptive Control) based adaptive control technologies garnering interest from the community as a whole. To facilitate this, a control study using piloted and unpiloted simulations to evaluate sensitivities and handling qualities was conducted. The adaptive control technologies under consideration were ALR (Adaptive Loop Recovery), BLS (Bounded Linear Stability), Hybrid Adaptive Control, L1, OCM (Optimal Control Modification), PMRAC (Predictor-based MRAC), and traditional MRAC.
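For readers unfamiliar with MRAC, the common flavor of these schemes can be illustrated on a first-order plant with the classic MIT-rule gradient update: adapt a feedback gain until the closed loop tracks a reference model. This is a textbook-style sketch, not any of the evaluated controllers; the plant, gains, and step sizes are illustrative assumptions.

```python
def mrac_step(theta, e, x, gamma=0.5, dt=0.05):
    """MIT-rule gradient step: move the adaptive gain against the
    tracking error times the regressor signal."""
    return theta - gamma * e * x * dt

def simulate(a_plant=1.0, a_ref=0.5, steps=200, dt=0.05):
    """Plant x' = -a_plant*x + u with u = theta*x, adapting theta so the
    closed loop matches the reference model x_m' = -a_ref*x_m."""
    x = x_m = 1.0
    theta = 0.0
    for _ in range(steps):
        u = theta * x
        x += dt * (-a_plant * x + u)     # Euler step of the plant
        x_m += dt * (-a_ref * x_m)       # Euler step of the reference model
        theta = mrac_step(theta, x - x_m, x, dt=dt)
    return x, x_m, theta
```

Here the ideal gain is a_plant - a_ref = 0.5; the adaptive gain drifts toward it as long as the tracking error persists, which is the basic mechanism all the MRAC variants above refine.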
ERIC Educational Resources Information Center
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua
2012-01-01
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
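The adaptation idea, refining where solution variation is large while guarding against statistical scatter, can be sketched in one dimension: split a cell when its jump to a neighbour exceeds a tolerance, but only when enough samples exist to trust the estimate. This is a generic illustration, not the G2 adaptation formulas; the thresholds and names are assumptions.

```python
def adapt_cells(cells, values, samples, jump_tol=0.5, min_samples=50):
    """Split cells whose solution jump to a neighbour exceeds jump_tol,
    but only when the cell has enough samples (scatter guard)."""
    new_cells = []
    for i, (lo, hi) in enumerate(cells):
        jumps = []
        if i > 0:
            jumps.append(abs(values[i] - values[i - 1]))
        if i < len(cells) - 1:
            jumps.append(abs(values[i] - values[i + 1]))
        if jumps and max(jumps) > jump_tol and samples[i] >= min_samples:
            mid = 0.5 * (lo + hi)
            new_cells += [(lo, mid), (mid, hi)]   # refine: split in two
        else:
            new_cells.append((lo, hi))            # keep as-is
    return new_cells
```

The scatter guard is the DSMC-specific twist: a large apparent gradient estimated from too few simulated particles is noise, not structure, so the cell is left alone until more samples accumulate.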
1991-06-01
Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh. …Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. …Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent…
AstroGrid-D: Grid technology for astronomical science
NASA Astrophysics Data System (ADS)
Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve
2011-02-01
We present status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of compute and storage facilities and makes it possible to schedule or monitor compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in Astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). We show from these examples how grid execution improves, e.g., the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 is focused on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on a disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
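The controller/worker pattern described here, distribute frames to workers, process in parallel, then collate the results, can be sketched with Python's standard library. This is a generic illustration, not the 4D Technology interface; the per-frame "analysis" is a stand-in and all names are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    """Stand-in for per-image phase analysis: here, just average the pixels."""
    return sum(frame) / len(frame)

def collate(results):
    """Combine per-frame results into one measurement (here, their mean)."""
    return sum(results) / len(results)

def analyze(frames, workers=4):
    """Distribute frame processing across a worker pool, then collate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_frame, frames))
    return collate(results)
```

Separating capture (building `frames`) from `analyze` mirrors the design choice in the abstract: the interferometer is never idle waiting for analysis, because the expensive processing is deferred and parallelized.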
Tactile Radar: experimenting a computer game with visually disabled.
Kastrup, Virgínia; Cassinelli, Alvaro; Quérette, Paulo; Bergstrom, Niklas; Sampaio, Eliana
2017-09-18
Visually disabled people increasingly use computers in everyday life, thanks to novel assistive technologies better tailored to their cognitive functioning. Like sighted people, many are interested in computer games - videogames and audio-games. Tactile-games are beginning to emerge. The Tactile Radar is a device through which a visually disabled person is able to detect distal obstacles. In this study, it is connected to a computer running a tactile-game. The game consists in finding and collecting randomly arranged coins in a virtual room. The study was conducted with nine congenitally blind people of both sexes, aged 20-64 years. Complementary methods of first and third person were used: the debriefing interview and the quasi-experimental design. The results indicate that the Tactile Radar is suitable for the creation of computer games specifically tailored for visually disabled people. Furthermore, the device seems capable of eliciting a powerful immersive experience. Methodologically speaking, this research contributes to the consolidation and development of complementary first- and third-person methods, particularly useful in the disability research field, including users' evaluation of the Tactile Radar's effectiveness in a virtual reality context. Implications for rehabilitation: Despite the growing interest in virtual games for visually disabled people, they still find barriers to accessing such games. Through the development of assistive technologies such as the Tactile Radar, applied in virtual games, we can create new opportunities for leisure, socialization and education for visually disabled people. The results of our study indicate that the Tactile Radar is adapted to the creation of video games for visually disabled people, providing a playful interaction with the players.
NASA Astrophysics Data System (ADS)
Narayanan, M.
2004-12-01
Catherine Palomba and Trudy Banta offer the following definition of assessment, adapted from one provided by Marchese in 1987: Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development (Palomba and Banta 1999). It is widely recognized that sophisticated computing technologies are becoming a key element in today's classroom instructional techniques. Regardless, the Professor must be held responsible for creating an instructional environment in which the technology actually supplements learning outcomes of the students. Almost all academic disciplines have found a niche for computer-based instruction in their respective professional domain. In many cases, it is viewed as an essential and integral part of the educational process. Educational institutions are committing substantial resources to the establishment of dedicated technology-based laboratories, so that they will be able to accommodate and fulfill students' desire to master these specific skills. This type of technology-based instruction may raise some fundamental questions about the core competencies of the student learner. Some of the most important questions are: 1. Is the utilization of these fast high-powered computers and user-friendly software programs creating a totally non-challenging instructional environment for the student learner? 2. Can technology itself all too easily overshadow the learning outcomes intended? 3. Are the educational institutions simply training students how to use technology rather than educating them in the appropriate field? 4. Are we still teaching content-driven courses and analysis-oriented subject matter? 5. Are these sophisticated modern-era technologies contributing to a decline in the Critical Thinking Capabilities of the 21st-century technology-savvy students?
The author tries to focus on technology as a tool and not on the technology itself. He further argues that students must demonstrate that they have the ability to think critically before they make an attempt to use technology in a chosen application-specific environment. The author further argues that training-based instruction has a very narrow focus that puts modern technology at the forefront of the learning enterprise system. The author promotes education-oriented strategies to provide the students with a broader perspective of the subject matter. The author is also of the opinion that students entering the workplace should clearly understand the context in which modern technologies are influencing the productive outcomes of the industrialized world. References: Marchese, T. J. (1987). Third Down, Ten Years to Go. AAHE Bulletin, Vol. 40, pages 3-8. Marchese, T. J. (1994). Assessment, Quality and Undergraduate Improvement. Assessment Update, Vol. 6, No. 3, pages 1-14. Montagu, A. S. (2001). High-technology instruction: A framework for teaching computer-based technologies. Journal on Excellence in College Teaching, 12 (1), 109-128. Palomba, Catherine A. and Banta, Trudy W. (1999). Assessment Essentials: Planning, Implementing and Improving Assessment in Higher Education. San Francisco: Jossey-Bass Publishers.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods; the new methodology is validated by computing demonstration problems and comparing the stress intensity factors to analytical results.
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.
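The classify-by-residual core of SRC, and a retraining-free unsupervised dictionary update of the kind the abstract describes, can be sketched in NumPy (an illustrative toy, not the authors' scheme: a per-class least-squares fit stands in for true sparse coding, and the function names and confidence margin are invented):

```python
import numpy as np

def src_classify(D_by_class, x):
    """Classify x as the class whose dictionary reconstructs it with the
    smallest residual (least-squares stand-in for sparse coding)."""
    residuals = {}
    for label, D in D_by_class.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[label] = np.linalg.norm(x - D @ coef)
    return min(residuals, key=residuals.get), residuals

def unsupervised_update(D_by_class, x, margin=0.2):
    """Append x to the winning class dictionary only when the win is
    confident (relative residual margin) -- a simple unsupervised
    dictionary update that needs no classifier retraining."""
    label, res = src_classify(D_by_class, x)
    ranked = sorted(res.values())
    if len(ranked) > 1 and (ranked[1] - ranked[0]) / ranked[1] > margin:
        D_by_class[label] = np.column_stack([D_by_class[label], x])
    return label
```

Each incoming test trial is classified and, if the residual gap is wide enough, absorbed into the dictionary, so the classifier tracks non-stationary signals at the cost of one least-squares solve per class.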
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koniges, A.E.; Craddock, G.G.; Schnack, D.D.
The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion regarding the issues of dynamically adaptive gridding. There were three invited talks related to adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that disperses uniformly the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted tokamak SOL modeling and MHD simulation problems related to the highest-priority ITER-relevant issues.
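The error-equidistribution idea behind adaptive mesh methods can be illustrated in one dimension (a toy sketch, not any of the workshop codes: cells whose crude jump-based error indicator exceeds a tolerance are bisected, concentrating points where the solution varies fastest):

```python
import numpy as np

def refine_mesh(x, f, tol):
    """One refinement pass: bisect every cell whose jump in f
    (a crude error indicator) exceeds tol."""
    new_x = [x[0]]
    for a, b in zip(x[:-1], x[1:]):
        if abs(f(b) - f(a)) > tol:
            new_x.append(0.5 * (a + b))  # bisect the steep cell
        new_x.append(b)
    return np.array(new_x)
```

Applied repeatedly to a steep front (e.g. a tanh profile), the pass adds points only near the front, which is the mechanism that makes multi-scale problems tractable at fixed cost.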
Lehrer, Nicole; Chen, Yinpeng; Duff, Margaret; L Wolf, Steven; Rikakis, Thanassis
2011-09-08
Few existing interactive rehabilitation systems can effectively communicate multiple aspects of movement performance simultaneously, in a manner that appropriately adapts across various training scenarios. In order to address the need for such systems within stroke rehabilitation training, a unified approach for designing interactive systems for upper limb rehabilitation of stroke survivors has been developed and applied for the implementation of an Adaptive Mixed Reality Rehabilitation (AMRR) System. The AMRR system provides computational evaluation and multimedia feedback for the upper limb rehabilitation of stroke survivors. A participant's movements are tracked by motion capture technology and evaluated by computational means. The resulting data are used to generate interactive media-based feedback that communicates to the participant detailed, intuitive evaluations of his performance. This article describes how the AMRR system's interactive feedback is designed to address specific movement challenges faced by stroke survivors. Multimedia examples are provided to illustrate each feedback component. Supportive data are provided for three participants of varying impairment levels to demonstrate the system's ability to train both targeted and integrated aspects of movement. The AMRR system supports training of multiple movement aspects together or in isolation, within adaptable sequences, through cohesive feedback that is based on formalized compositional design principles. From preliminary analysis of the data, we infer that the system's ability to train multiple foci together or in isolation in adaptable sequences, utilizing appropriately designed feedback, can lead to functional improvement. The evaluation and feedback frameworks established within the AMRR system will be applied to the development of a novel home-based system to provide an engaging yet low-cost extension of training for longer periods of time.
Crowe, Barbara J; Rio, Robin
2004-01-01
This article reviews the use of technology in music therapy practice and research for the purpose of providing music therapy educators and clinicians with specific and accurate accounts of the types and benefits of technology being used in various settings. Additionally, this knowledge will help universities comply with National Association of Schools of Music requirements and help to standardize the education and training of music therapists in this rapidly changing area. Information was gathered through a literature review of music therapy and related professional journals and a wide variety of books and personal communications. More data were gathered in a survey requesting information on current use of technology in education and practice. This solicitation was sent to all American Music Therapy Association approved universities and clinical training directors. Technology applications in music therapy are organized according to the following categories: (a) adapted musical instruments, (b) recording technology, (c) electric/electronic musical instruments, (d) computer applications, (e) medical technology, (f) assistive technology for the disabled, and (g) technology-based music/sound healing practices. The literature reviewed covers 177 books and articles from a span of almost 40 years. Recommendations are made for incorporating technology into music therapy course work and for review and revision of AMTA competencies. The need for an all-encompassing clinical survey of the use of technology in current music therapy practice is also identified.
NASA Astrophysics Data System (ADS)
Rousis, Damon A.
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto dominated solutions across all concepts. A set of algebraic sample problems along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives over traditional configurations.
The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
Fayers, Peter M
2007-01-01
We review the papers presented at the NCI/DIA conference, to identify areas of controversy and uncertainty, and to highlight those aspects of item response theory (IRT) and computer adaptive testing (CAT) that require theoretical or empirical research in order to justify their application to patient reported outcomes (PROs). IRT and CAT offer exciting potential for the development of a new generation of PRO instruments. However, most of the research into these techniques has been in non-healthcare settings, notably in education. Educational tests are very different from PRO instruments, and consequently problematic issues arise when adapting IRT and CAT to healthcare research. Clinical scales differ appreciably from educational tests, and symptoms have characteristics distinctly different from examination questions. This affects the transfer of IRT technology. Particular areas of concern when applying IRT to PROs include inadequate software, difficulties in selecting models and communicating results, insufficient testing of local independence and other assumptions, and a need for guidelines for estimating sample size requirements. Similar concerns apply to differential item functioning (DIF), which is an important application of IRT. Multidimensional IRT is likely to be advantageous only for closely related PRO dimensions. Although IRT and CAT provide appreciable potential benefits, there is a need for circumspection. Not all PRO scales are necessarily appropriate targets for this methodology. Traditional psychometric methods, and especially qualitative methods, continue to have an important role alongside IRT. Research should be funded to address the specific concerns that have been identified.
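The CAT mechanics at issue — administer the item with maximum Fisher information at the current trait estimate, then re-estimate the trait from the responses — can be sketched with a one-parameter Rasch model (an illustrative toy, not any specific PRO instrument; item difficulties and the Newton-Raphson step count are arbitrary):

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of one Rasch item at trait level theta."""
    p = p_correct(theta, b)
    return p * (1 - p)

def next_item(theta, difficulties, administered):
    """Pick the unadministered item with maximum information."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, difficulties[i]))

def update_theta(theta, items, responses, difficulties, steps=20):
    """Newton-Raphson maximum-likelihood update of the latent trait."""
    for _ in range(steps):
        grad = sum(r - p_correct(theta, difficulties[i])
                   for i, r in zip(items, responses))
        info = sum(item_information(theta, difficulties[i]) for i in items)
        if info == 0:
            break
        theta += grad / info
    return theta
```

Information peaks where item difficulty matches the current estimate, which is exactly why sample size and model-assumption checks matter: a mis-specified model steers item selection as well as scoring.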
gCUP: rapid GPU-based HIV-1 co-receptor usage prediction for next-generation sequencing.
Olejnik, Michael; Steuwer, Michel; Gorlatch, Sergei; Heider, Dominik
2014-11-15
Next-generation sequencing (NGS) has a large potential in HIV diagnostics, and genotypic prediction models have been developed and successfully tested in recent years. However, albeit highly accurate, these computational models lack the computational efficiency to reach their full potential. In this study, we demonstrate the use of graphics processing units (GPUs) in combination with a computational prediction model for HIV tropism. Our new model, named gCUP, parallelized and optimized for GPU, is highly accurate and can classify >175 000 sequences per second on an NVIDIA GeForce GTX 460. The computational efficiency of our new model is the next step in enabling NGS technologies to reach clinical significance in HIV diagnostics. Moreover, our approach is not limited to HIV tropism prediction, but can also be easily adapted to other settings, e.g. drug resistance prediction. The source code can be downloaded at http://www.heiderlab.de (contact: d.heider@wz-straubing.de).
Near Real-Time Image Reconstruction
NASA Astrophysics Data System (ADS)
Denker, C.; Yang, G.; Wang, H.
2001-08-01
In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieving diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
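The flavor of a heuristic remapper — assigning partitions to processors so that redistribution cost stays low — can be sketched as a simple greedy pass (an illustrative stand-in, not the paper's algorithm; `cost[p][q]` is a hypothetical matrix giving the data volume partition p would have to move to live on processor q):

```python
def greedy_remap(cost):
    """Greedy partition-to-processor assignment: repeatedly take the
    cheapest unassigned (partition, processor) pair, so partitions tend
    to stay where their data already is."""
    pairs = sorted((c, p, q)
                   for p, row in enumerate(cost)
                   for q, c in enumerate(row))
    assignment, used = {}, set()
    for c, p, q in pairs:
        if p not in assignment and q not in used:
            assignment[p] = q
            used.add(q)
    return assignment

def total_cost(cost, assignment):
    """Total data volume moved under an assignment."""
    return sum(cost[p][q] for p, q in assignment.items())
```

A greedy pass like this is fast but only near-optimal, which mirrors the paper's trade-off: assignments within a few percent of optimal at a small fraction of the computational time an exact solver would need.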
Blanson Henkemans, O. A.; Rogers, W. A.; Fisk, A. D.; Neerincx, M. A.; Lindenberg, J.; van der Mast, C. A. P. G.
2014-01-01
Summary Objectives We developed an adaptive computer assistant for the supervision of diabetics’ self-care, to support limiting illness and need for acute treatment, and improve health literacy. This assistant monitors self-care activities logged in the patient’s electronic diary. Accordingly, it provides context-aware feedback. The objective was to evaluate whether older adults in general can make use of the computer assistant and to compare an adaptive computer assistant with a fixed one, concerning its usability and contribution to health literacy. Methods We conducted a laboratory experiment in the Georgia Tech Aware Home wherein 28 older adults participated in a usability evaluation of the computer assistant, while engaged in scenarios reflecting normal and health-critical situations. We evaluated the assistant on effectiveness, efficiency, satisfaction, and educational value. Finally, we studied the moderating effects of the subjects’ personal characteristics. Results Logging self-care tasks and receiving feedback from the computer assistant enhanced the subjects’ knowledge of diabetes. The adaptive assistant was more effective in dealing with normal and health-critical situations, and, generally, it led to more time efficiency. Subjects’ personal characteristics had substantial effects on the effectiveness and efficiency of the two computer assistants. Conclusions Older adults were able to use the adaptive computer assistant. In addition, it had a positive effect on the development of health literacy. The assistant has the potential to support older diabetics’ self care while maintaining quality of life. PMID:18213433
NASA Astrophysics Data System (ADS)
Hao, Qiushi; Zhang, Xin; Wang, Yan; Shen, Yi; Makis, Viliam
2018-07-01
Acoustic emission (AE) technology is sensitive to subliminal rail defects; however, strong wheel-rail contact rolling noise under high-speed conditions has gravely impeded the detection of rail defects using traditional denoising methods. In this context, the paper develops an adaptive detection method for rail cracks, which combines multiresolution analysis with an improved adaptive line enhancer (ALE). To obtain elaborate multiresolution information on transient crack signals at low computational cost, a lifting scheme-based undecimated wavelet packet transform is adopted. In order to capture the impulsive property of crack signals, a Shannon entropy-improved ALE is proposed as a signal-enhancing approach, where Shannon entropy is introduced to improve the cost function. A rail defect detection plan based on the proposed method for high-speed conditions is then put forward. Theoretical analysis and experimental verification demonstrate that the proposed method has superior performance in enhancing the rail defect AE signal and reducing the strong background noise, offering an effective multiresolution approach for rail defect detection under high-speed, strong-noise conditions.
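A plain LMS adaptive line enhancer, the baseline that an entropy-improved ALE builds on, can be sketched as follows (standard LMS only; the paper's Shannon-entropy cost modification and wavelet front end are not reproduced, and the filter order, delay, and step size are arbitrary):

```python
import numpy as np

def adaptive_line_enhancer(x, order=8, delay=1, mu=0.01):
    """LMS adaptive line enhancer: predict the current sample from
    delayed samples.  The filter output y tracks the correlated
    (narrowband) part of x; the prediction error e retains the
    uncorrelated transients, e.g. impulsive AE crack bursts."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(order + delay, len(x)):
        u = x[n - delay - order + 1 : n - delay + 1][::-1]  # delayed tap vector
        y[n] = w @ u
        e[n] = x[n] - y[n]
        w += 2 * mu * e[n] * u  # LMS weight update
    return y, e
```

On a pure tone the predictor converges and the error collapses toward zero; a short transient added to the input survives in `e`, which is the property the crack-detection scheme exploits.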
2-dimensional implicit hydrodynamics on adaptive grids
NASA Astrophysics Data System (ADS)
Stökl, A.; Dorfi, E. A.
2007-12-01
We present a numerical scheme for two-dimensional hydrodynamics computations using a 2D adaptive grid together with an implicit discretization. The combination of these techniques has offered favorable numerical properties applicable to a variety of one-dimensional astrophysical problems, which motivated us to generalize this approach for two-dimensional applications. Due to the different topological nature of 2D grids compared to 1D problems, grid adaptivity has to avoid severe grid distortions, which necessitates additional smoothing parameters to be included in the formulation of a 2D adaptive grid. The concept of adaptivity is described in detail and several test computations demonstrate the effectiveness of smoothing. The coupled solution of this grid equation together with the equations of hydrodynamics is illustrated by computation of a 2D shock tube problem.
NASA Technical Reports Server (NTRS)
Hornstein, Rhoda Shaller; Willoughby, John K.
1991-01-01
Traditional practice of systems engineering management assumes requirements can be precisely determined and unambiguously defined prior to system design and implementation; practice further assumes requirements are held static during implementation. Human-computer decision support systems for service planning and scheduling applications do not conform well to these assumptions. Adaptations to the traditional practice of systems engineering management are required. Basic technology exists to support these adaptations. Additional innovations must be encouraged and nurtured. Continued partnership between the programmatic and technical perspectives assures proper balance of the impossible with the possible. Past problems have the following origins: not recognizing the unusual and perverse nature of the requirements for planning and scheduling; not recognizing the best starting-point assumptions for the design; not understanding the type of system being built; and not understanding the design consequences of the operations concept selected.
Compact MEMS-based adaptive optics: optical coherence tomography for clinical use
NASA Astrophysics Data System (ADS)
Chen, Diana C.; Olivier, Scot S.; Jones, Steven M.; Zawadzki, Robert J.; Evans, Julia W.; Choi, Stacey S.; Werner, John S.
2008-02-01
We describe a compact MEMS-based adaptive optics (AO) optical coherence tomography (OCT) system with improved AO performance and ease of clinical use. A typical AO system consists of a Shack-Hartmann wavefront sensor and a deformable mirror that measures and corrects the ocular and system aberrations. Because of limitations of current deformable mirror technologies, the amount of real-time ocular-aberration compensation was restricted and small in previous AO-OCT instruments. In this instrument, we incorporate an optical apparatus to correct the patients' spectacle aberrations, such as myopia, hyperopia and astigmatism. This eliminates the tedious process of using trial lenses in clinical imaging. Different amounts of spectacle aberration compensation were achieved by motorized stages and automated with the AO computer for ease of clinical use. In addition, the compact AO-OCT system was optimized to have minimum system aberrations, to reduce AO registration errors and improve AO performance.
Computer simulations and real-time control of ELT AO systems using graphical processing units
NASA Astrophysics Data System (ADS)
Wang, Lianqi; Ellerbroek, Brent
2012-07-01
The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphical processing units (GPUs) using NVIDIA CUDA technology, we achieved a 10-fold speedup for each GTX 580 GPU used compared to a modern quad-core CPU. Each time step of a full-scale end-to-end simulation for the TMT narrow field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix-vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
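The real-time step of an MVM reconstructor is a single precomputed matrix-vector product, and the multi-GPU layout amounts to splitting the matrix into row blocks, one per device. A NumPy emulation of that structure (illustrative and CPU-side only, not the MAOS/CUDA code; matrix sizes are arbitrary):

```python
import numpy as np

def apply_reconstructor(R, slopes):
    """One real-time step: DM commands are a single matrix-vector
    product of the precomputed reconstructor with the WFS slopes."""
    return R @ slopes

def split_mvm(R, slopes, n_devices):
    """Emulate the multi-GPU layout: each device holds a row block of R
    and computes its share of the actuator commands independently."""
    blocks = np.array_split(R, n_devices, axis=0)
    return np.concatenate([B @ slopes for B in blocks])
```

Because the row blocks are independent, latency scales down with the number of devices, which is what makes meeting a fixed real-time budget a matter of adding GPUs.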
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Melissa R
2013-10-01
The following pages represent the status of policy regarding adaptation of the electric grid to climate change and proposed directions for new policy development. While strides are being made to understand the current climate and to predict hazards it may present to human systems, both the science and the policy remain at present in an analytical state. The policy proposed in this document involves first continued computational modeling of outcomes, which will produce a portfolio of options to be considered in light of specific region-related risks. It is proposed that the modeling continue not only until reasonable policy at various levels of jurisdiction can be derived from its outcome but also on a continuing basis, so that as improvements in the understanding of the state and trajectory of climate science along with advancements in technology arise, they can be incorporated into an appropriate and evolving policy.
Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing
NASA Technical Reports Server (NTRS)
Sterling, T. L.; Zima, H. P.
2002-01-01
Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.
Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko
2013-06-18
Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
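For contrast with the amoeba-inspired dynamics, a conventional stochastic local search for SAT — the kind of baseline such schemes are compared against — can be sketched as a Papadimitriou-style random walk (an illustrative sketch, not the paper's comparison method; DIMACS-style signed integer literals are assumed):

```python
import random

def random_walk_sat(clauses, n_vars, max_flips=20000, seed=1):
    """Random-walk SAT solver: clauses are lists of nonzero ints
    (positive literal k means variable k is true, negative means
    false).  Flip a random variable from a random unsatisfied clause
    until every clause is satisfied or the flip budget runs out."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def satisfied(cl):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in cl)

    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not satisfied(cl)]
        if not unsat:
            return assign  # model found
        lit = rng.choice(rng.choice(unsat))
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None  # give up
```

On small satisfiable instances the walk terminates quickly, but the expected number of flips grows steeply with problem size, which is the scaling behavior the amoeba-inspired approach is reported to improve on.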
An object-oriented, technology-adaptive information model
NASA Technical Reports Server (NTRS)
Anyiwo, Joshua C.
1995-01-01
The primary objective was to develop a computer information system for effectively presenting NASA's technologies to American industries, for appropriate commercialization. To this end a comprehensive information management model, applicable to a wide variety of situations, and immune to computer software/hardware technological gyrations, was developed. The model consists of four main elements: a DATA_STORE, a data PRODUCER/UPDATER_CLIENT and a data PRESENTATION_CLIENT, anchored to a central object-oriented SERVER engine. This server engine facilitates exchanges among the other model elements and safeguards the integrity of the DATA_STORE element. It is designed to support new technologies, as they become available, such as Object Linking and Embedding (OLE), on-demand audio-video data streaming with compression (such as is required for video conferencing), Worldwide Web (WWW) and other information services and browsing, fax-back data requests, presentation of information on CD-ROM, and regular in-house database management, regardless of the data model in place. The four components of this information model interact through a system of intelligent message agents which are customized to specific information exchange needs. This model is at the leading edge of modern information management models. It is independent of technological changes and can be implemented in a variety of ways to meet the specific needs of any communications situation. This summer a partial implementation of the model has been achieved. The structure of the DATA_STORE has been fully specified and successfully tested using Microsoft's FoxPro 2.6 database management system. Data PRODUCER/UPDATER and PRESENTATION architectures have been developed and also successfully implemented in FoxPro; and work has started on a full implementation of the SERVER engine. 
The model has also been successfully applied to a CD-ROM presentation of NASA's technologies in support of Langley Research Center's TAG efforts.
Adapting construction staking to modern technology : final report.
DOT National Transportation Integrated Search
2017-08-01
This report summarizes the tasks and findings of the ICT Project R27-163, Adapting Construction Staking to Modern Technology, which aims to develop written procedures for the use of modern technologies (such as GPS and civil information modeling) in ...
On the use of interaction error potentials for adaptive brain computer interfaces.
Llera, A; van Gerven, M A J; Gómez, V; Jensen, O; Kappen, H J
2011-12-01
We propose an adaptive classification method for Brain Computer Interfaces (BCI) which uses Interaction Error Potentials (IErrPs) as a reinforcement signal and adapts the classifier parameters when an error is detected. We analyze the quality of the proposed approach in relation to the misclassification of the IErrPs. In addition, we compare static versus adaptive classification performance using artificial and MEG data. We show that the proposed adaptive framework significantly improves on the static classification methods.
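The core idea — update the classifier only when an interaction error potential flags the last decision as wrong — can be sketched as a perceptron-style correction (a toy binary-task illustration, not the authors' method; the learning rate and the error-implies-other-class labeling convention are invented for the sketch):

```python
import numpy as np

def adapt_on_error(w, x, predicted, errp_detected, lr=0.1):
    """Reinforcement-style update for a linear classifier: only when
    the brain's IErrP flags the decision as an error do we move the
    boundary against the predicted (wrong) label."""
    if errp_detected:
        true_label = -predicted  # binary task: an error implies the other class
        w = w + lr * true_label * x
    return w
```

Trials without a detected IErrP leave the classifier untouched, so adaptation cost is paid only on (detected) mistakes; misdetected IErrPs inject wrong-signed updates, which is exactly the sensitivity the abstract says the authors analyze.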
Hayden, Eric J
2016-08-15
RNA molecules provide a realistic but tractable model of a genotype to phenotype relationship. This relationship has been extensively investigated computationally using secondary structure prediction algorithms. Enzymatic RNA molecules, or ribozymes, offer access to genotypic and phenotypic information in the laboratory. Advancements in high-throughput sequencing technologies have enabled the analysis of sequences in the lab that now rivals what can be accomplished computationally. This has motivated a resurgence of in vitro selection experiments and opened new doors for the analysis of the distribution of RNA functions in genotype space. A body of computational experiments has investigated the persistence of specific RNA structures despite changes in the primary sequence, and how this mutational robustness can promote adaptations. This article summarizes recent approaches that were designed to investigate the role of mutational robustness during the evolution of RNA molecules in the laboratory, and presents theoretical motivations, experimental methods and approaches to data analysis.
Viitanen, Johanna; Nieminen, Marko; Hypponen, Hannele; Laaveri, Tinja
2011-01-01
Several researchers share the concern that healthcare information systems fail to support communication and collaboration in clinical practice. The objective of this paper is to investigate the current state of computer-supported patient information exchange and the associated communication between clinicians. We report findings from a national survey on Finnish physicians' experiences with their currently used clinical information systems with regard to patient information documentation, retrieval, management and exchange-related tasks. The questionnaire study of 3929 physicians indicated that the main concern was cross-organisational patient information delivery. In addition, physicians argued that computer usage increasingly steals time and attention from caring activities and even disturbs physician-nurse collaboration. Problems in information management were particularly emphasised among physicians working in hospitals and wards. The survey findings indicated that collaborative applications and mobile or wireless solutions have not been widely adopted in Finnish healthcare, and suggested an urgent need for adopting appropriate information and communication technology applications to support information exchange and communication between physicians, and between physicians and nurses.
A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software
NASA Astrophysics Data System (ADS)
Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.
2017-10-01
Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.
Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain
Dai, Yonghui; Han, Dongmei; Dai, Weihui
2014-01-01
The stock index reflects the fluctuation of the stock market. For a long time, there has been a great deal of research on stock index forecasting. However, traditional methods are limited in achieving ideal precision in a dynamic market because of the influence of many factors, such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted new attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network with a Markov chain, together with its modeling and computing technology. The method comprises initial forecasting by the improved BP neural network, division of the Markov state regions, computation of the state-transition probability matrix, and adjustment of the prediction. Results of the empirical study show that this method can achieve high accuracy in stock index prediction, and it could provide a good reference for investment in the stock market. PMID:24782659
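The four steps named in the abstract (initial forecast, state-region division, transition matrix, prediction adjustment) can be sketched as follows. For brevity, a least-squares linear trend stands in for the improved BP neural network, and the equal-width state bins and function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def markov_adjusted_forecast(series, n_states=3):
    """Markov-chain correction of a baseline forecast, following the
    abstract's four steps; a least-squares linear trend stands in for
    the improved BP neural network (an illustrative simplification)."""
    t = np.arange(len(series))
    # Step 1: baseline in-sample forecast (placeholder for the BP network)
    coef = np.polyfit(t, series, 1)
    fitted = np.polyval(coef, t)
    resid = series - fitted
    # Step 2: divide residuals into Markov state regions (equal-width bins)
    edges = np.linspace(resid.min(), resid.max(), n_states + 1)
    states = np.clip(np.digitize(resid, edges[1:-1]), 0, n_states - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Step 3: estimate the state-transition probability matrix
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
    # Step 4: adjust the next-step forecast by the expected residual
    next_base = np.polyval(coef, len(series))
    expected_resid = P[states[-1]] @ centers
    return next_base + expected_resid
```

For a series with an alternating residual pattern, the Markov correction pulls the next-step forecast toward the residual state that historically follows the current one.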
Anyonic braiding in optical lattices
Zhang, Chuanwei; Scarola, V. W.; Tewari, Sumanta; Das Sarma, S.
2007-01-01
Topological quantum states of matter, both Abelian and non-Abelian, are characterized by excitations whose wavefunctions undergo nontrivial statistical transformations as one excitation is moved (braided) around another. Topological quantum computation proposes to use the topological protection and the braiding statistics of a non-Abelian topological state to perform quantum computation. The enormous technological prospect of topological quantum computation provides new motivation for experimentally observing a topological state. Here, we explicitly work out a realistic experimental scheme to create and braid the Abelian topological excitations in the Kitaev model built on a tunable robust system, a cold atom optical lattice. We also demonstrate how to detect the key feature of these excitations: their braiding statistics. Observation of this statistics would directly establish the existence of anyons, quantum particles that are neither fermions nor bosons. In addition to establishing topological matter, the experimental scheme we develop here can also be adapted to a non-Abelian topological state, supported by the same Kitaev model but in a different parameter regime, to eventually build topologically protected quantum gates. PMID:18000038
Adaptation of acoustic model experiments of STM via smartphones and tablets
NASA Astrophysics Data System (ADS)
Thees, Michael; Hochberg, Katrin; Kuhn, Jochen; Aeschlimann, Martin
2017-10-01
The importance of Scanning Tunneling Microscopy (STM) in today's research and industry raises the question of how to include such a key technology in physics education. Manfred Euler developed an acoustic model experiment to illustrate the fundamental measuring principles, based on an analogy between quantum mechanics and acoustics. Building on earlier work, we used mobile devices such as smartphones and tablets, instead of a computer, to record and display the experimental data, thus converting Euler's experimental setup into a low-cost experiment that is easy for students to build and handle themselves.
Innovative Materials for Aircraft Morphing
NASA Technical Reports Server (NTRS)
Simpson, J. O.; Wise, S. A.; Bryant, R. G.; Cano, R. J.; Gates, T. S.; Hinkley, J. A.; Rogowski, R. S.; Whitley, K. S.
1997-01-01
Reported herein is an overview of the research being conducted within the Materials Division at NASA Langley Research Center on the development of smart material technologies for advanced airframe systems. The research is a part of the Aircraft Morphing Program which is a new six-year research program to develop smart components for self-adaptive airframe systems. The fundamental areas of materials research within the program are computational materials; advanced piezoelectric materials; advanced fiber optic sensing techniques; and fabrication of integrated composite structures. This paper presents a portion of the ongoing research in each of these areas of materials research.
An Investigation on Computer-Adaptive Multistage Testing Panels for Multidimensional Assessment
ERIC Educational Resources Information Center
Wang, Xinrui
2013-01-01
The computer-adaptive multistage testing (ca-MST) has been developed as an alternative to computerized adaptive testing (CAT), and been increasingly adopted in large-scale assessments. Current research and practice only focus on ca-MST panels for credentialing purposes. The ca-MST test mode, therefore, is designed to gauge a single scale. The…
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using a solution-adaptive finite element method for linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method, validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the computed stress intensity factors with analytical results.
Computer Adaptive Testing, Big Data and Algorithmic Approaches to Education
ERIC Educational Resources Information Center
Thompson, Greg
2017-01-01
This article critically considers the promise of computer adaptive testing (CAT) and digital data to provide better and quicker data that will improve the quality, efficiency and effectiveness of schooling. In particular, it uses the case of the Australian NAPLAN test that will become an online, adaptive test from 2016. The article argues that…
ERIC Educational Resources Information Center
Hansen, Duncan N.; And Others
Computer simulations of three individualized adaptive instructional models (AIM) were undertaken to determine if these models function as prescribed in Air Force technical training programs. In addition, the project sought to develop a user's guide for effective understanding of adaptive models during field implementation. Successful simulations…
Adaptive designs in clinical trials.
Bowalekar, Suresh
2011-01-01
In addition to the expensive and lengthy process of developing a new medicine, the attrition rate in clinical research was on the rise, resulting in stagnation in the development of new compounds. As a consequence, the US Food and Drug Administration released a critical path initiative document in 2004, highlighting the need for developing innovative trial designs. One of the suggested innovations was the use of adaptive designs for clinical trials. Thus, post critical path initiative, there is a growing interest in using adaptive designs for the development of pharmaceutical products. Adaptive designs are expected to have great potential to reduce the number of patients and the duration of a trial, and to limit exposure to the new drug. Adaptive designs are not new, in the sense that the interim analysis (IA)/review of accumulated data used in adaptive designs existed in the past too. However, such reviews/analyses of accumulated data were not necessarily planned at the trial-design stage, and the methods used were not necessarily compliant with the clinical trial process. The Bayesian approach commonly used in adaptive designs was developed by Thomas Bayes in the 18th century, about a hundred years before the development of modern statistical methods by the father of modern statistics, Sir Ronald A. Fisher, but the complexity involved in the Bayesian approach prevented its use in real-life practice. The advances in computer and information technology over the last three to four decades have changed the scenario, and Bayesian techniques are now being used in adaptive designs in addition to the other sequential methods used in IA. This paper attempts to describe the various adaptive designs in clinical trials and the views of stakeholders about the feasibility of using them, without going into mathematical complexities.
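A minimal sketch of the planned interim review that distinguishes adaptive designs, assuming a single-arm binary endpoint with a uniform Beta prior; the look schedule, stopping thresholds, and function names are illustrative, not taken from the paper:

```python
import numpy as np

def posterior_prob_better(successes, failures, p0=0.5, grid=10001):
    """P(response rate > p0 | data) under a uniform Beta(1, 1) prior,
    computed by numerical integration of the Beta posterior."""
    p = np.linspace(0.0, 1.0, grid)
    dens = p ** successes * (1.0 - p) ** failures
    dens /= dens.sum()          # normalize the discretized posterior
    return dens[p > p0].sum()

def adaptive_trial(outcomes, look_every=10, stop_high=0.95, stop_low=0.05):
    """Group-sequential Bayesian monitoring: review the accumulated data
    at planned interim analyses and stop early for efficacy or futility."""
    s = f = 0
    for i, y in enumerate(outcomes, start=1):
        s += y
        f += 1 - y
        if i % look_every == 0:              # planned interim analysis
            prob = posterior_prob_better(s, f)
            if prob >= stop_high:
                return "efficacy", i
            if prob <= stop_low:
                return "futility", i
    return "inconclusive", len(outcomes)
```

With uniformly successful outcomes the trial stops for efficacy at the first planned look; with uniformly failing outcomes it stops for futility at the same point, whereas a fixed design would have enrolled every patient in both cases.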
Triple shape memory polymers by 4D printing
NASA Astrophysics Data System (ADS)
Bodaghi, M.; Damanpack, A. R.; Liao, W. H.
2018-06-01
This article aims at introducing triple shape memory polymers (SMPs) made by four-dimensional (4D) printing technology and at shaping adaptive structures for mechanical/bio-medical devices. The main approach is based on arranging hot–cold programming of SMPs with fused deposition modeling technology to engineer adaptive structures with a triple shape memory effect (SME). Experiments are conducted to characterize the elasto-plastic and hyper-elastic thermo-mechanical material properties of the SMPs at low and high temperatures in the large-deformation regime. The feasibility of dual and triple SMPs with self-bending features is demonstrated experimentally. This is advantageous in situations where it is desired to perform mechanical manipulations on the 4D printed objects for specific purposes, or when they inevitably experience cold programming before activation. A phenomenological 3D constitutive model is developed for a quantitative understanding of the dual/triple SME of SMPs fabricated by 4D printing in the large-deformation range. Governing equations of equilibrium are established for adaptive structures on the basis of the nonlinear Green–Lagrange strains. They are then solved by developing a finite element approach along with an elastic-predictor plastic-corrector return-map procedure accomplished by the Newton–Raphson method. The computational tool is applied to simulate dual/triple SMP structures enabled by 4D printing and to explore the hot–cold programming mechanisms behind material tailoring. It is shown that the 4D printed dual/triple SMPs have great potential in mechanical/bio-medical applications such as self-bending grippers/stents and self-shrinking/tightening staples.
The Future of Adaptive Learning: Does the Crowd Hold the Key?
ERIC Educational Resources Information Center
Heffernan, Neil T.; Ostrow, Korinn S.; Kelly, Kim; Selent, Douglas; Van Inwegen, Eric G.; Xiong, Xiaolu; Williams, Joseph Jay
2016-01-01
Due to substantial scientific and practical progress, learning technologies can effectively adapt to the characteristics and needs of students. This article considers how learning technologies can adapt over time by crowdsourcing contributions from teachers and students--explanations, feedback, and other pedagogical interactions. Considering the…
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
NASA Astrophysics Data System (ADS)
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low- and intermediate-level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple-scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution chosen to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion than with the homogeneous scheme. In some cases, the introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor.
Finally, learning algorithms are shown to be a viable technique for engineering computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis, a well-known optimization method (the memoryless Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm in numerical stability, automatic selection of parameters, and convergence properties.
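The coarse-to-fine strategy described above can be illustrated on a one-dimensional analogue of motion estimation: estimate a displacement exhaustively at the coarsest resolution, then refine it locally at each finer level. The circular-shift model, the SSD criterion, and the ±2 refinement window are simplifying assumptions for the sketch, not the thesis's actual scheme:

```python
import numpy as np

def circular_shift_ssd(a, b, s):
    """Sum of squared differences between a circularly shifted by s and b."""
    return float(np.sum((np.roll(a, s) - b) ** 2))

def coarse_to_fine_shift(a, b, levels=3):
    """Estimate s such that b ≈ np.roll(a, s): exhaustive search at the
    coarsest resolution, then local refinement at each finer level."""
    shift = 0
    for lvl in reversed(range(levels + 1)):
        f = 2 ** lvl
        ca, cb = a[::f], b[::f]            # subsampled pyramid level
        if lvl == levels:
            candidates = range(len(ca))    # full search, but on few samples
        else:
            # double the coarse estimate, then search a small local window
            candidates = range(2 * shift - 2, 2 * shift + 3)
        shift = min(candidates, key=lambda s: circular_shift_ssd(ca, cb, s))
    return shift
```

Only the coarsest level is searched exhaustively; each finer level tests just five candidates, which is the source of the computational saving the thesis attributes to hierarchical processing.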
On Representative Spaceflight Instrument and Associated Instrument Sensor Web Framework
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Patel, Umeshkumar; Vootukuru, Meg
2007-01-01
Sensor Web-based adaptation and sharing of space flight mission resources, including those of the Space-Ground and Control-User communication segment, could greatly benefit from utilization of heritage Internet Protocols and devices applied to spaceflight (SpaceIP). This has been successfully demonstrated by a few recent spaceflight experiments. However, while terrestrial applications of Internet protocols are well developed and understood (mostly due to billions of dollars in investments by the military and industry), the spaceflight application of Internet protocols is still in its infancy. Progress in the development of SpaceIP-enabled instrument components will largely determine the SpaceIP utilization of those investments and its acceptance in years to come. Like SpaceIP, commercial real-time, instrument-colocated computational resources, data compression and storage can be enabled on board a spacecraft and, in turn, support a powerful application of Sensor Web-based design to a spaceflight instrument. Sensor Web-enabled reconfiguration and adaptation of structures for hardware resources and information systems will put Field Programmable Gate Arrays (FPGAs) and other aerospace programmable logic devices to the use for which this technology was intended. These are a few obvious potential benefits of Sensor Web technologies for spaceflight applications; however, they are still waiting to be explored, because a new approach to spaceflight instrumentation is needed to make these mature sensor web technologies applicable to spaceflight. In this paper we present an approach to developing related and enabling spaceflight instrument-level technologies based on the new concept of a representative spaceflight Instrument Sensor Web (ISW).
Gay and Bisexual men's use of the Internet: Research from the 1990s through 2013
Grov, Christian; Breslow, Aaron S.; Newcomb, Michael E.; Rosenberger, Joshua G.; Bauermeister, Jose A
2014-01-01
In this review, we document the historical and cultural shifts in how gay and bisexual men have used the Internet for sexuality between the 1990s and 2013. Over that time, gay and bisexual men have rapidly taken to using the Internet for sexual purposes: sexual health information seeking, finding sex partners, dating, cybersex, and pornography. Gay and bisexual men have adapted to the ever-evolving technological advances that have been made in connecting users to the Internet—from logging into the World Wide Web via dial-up modem on a desktop computer to geo-social and sexual networking via a handheld device. Likewise, researchers too have adapted to the Internet to study gay and bisexual men, though not at the same rapid pace at which technology (and its users) have advanced. Studies have carefully considered the ethics, feasibility, and acceptability of using the Internet to conduct research and interventions with gay and bisexual men. Much of this work has been grounded in models of disease prevention, largely as a result of the ongoing HIV/AIDS epidemic. The urgent need to reduce HIV in this population has been a driving force to develop innovative research and Internet-based intervention methodologies. Moving forward, a more holistic understanding of gay and bisexual men's sexual behavior might be warranted to address continued HIV and STI disparities. The Internet, and specifically mobile technology, is an environment gay and bisexual men are using for sexual purposes. These innovative technologies represent powerful resources for researchers to study and provide rapidly evolving outreach to gay and bisexual men. PMID:24754360
Adenle, Ademola A; Azadi, Hossein; Arbiol, Joseph
2015-09-15
Concerns about mitigating and adapting to climate change have renewed the incentive for agricultural research investments and the development of further innovation priorities around the world, particularly in developing countries. In the near future, the development of new agricultural measures and the proper diffusion of technologies will greatly influence farmers' ability to adapt to and mitigate climate change. Using bibliometric approaches based on academic journal publications and patent data, we assess the impact of research and development (R&D) on new and existing technologies within the context of climate change mitigation and adaptation. We show that many developing countries invest limited resources in R&D on relevant technologies that have great potential for mitigation and adaptation in agricultural production. We also discuss constraints, including weak infrastructure, limited research capacity, lack of credit facilities and limited technology transfer, that may hinder the application of innovation in tackling the challenges of climate change. A range of policy measures is also suggested to overcome the identified constraints and to ensure that the potential of innovation for climate change mitigation and adaptation is realized. Copyright © 2015 Elsevier Ltd. All rights reserved.
de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549
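A common baseline detector for SSVEP-based BCIs like the one described (though not necessarily the classifier used in this study) simply compares spectral power at the candidate stimulation frequencies; the sampling rate, frequency set, and noise level below are illustrative assumptions:

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs):
    """Pick the stimulation frequency with the greatest spectral power,
    a common baseline detector for SSVEP-based BCIs."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in stim_freqs:
        k = np.argmin(np.abs(freqs - f))   # nearest FFT bin to candidate f
        powers.append(spectrum[k])
    return stim_freqs[int(np.argmax(powers))]

# toy example: the user attends to a stimulus flickering at 12 Hz
fs, dur = 256, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
```

In a real system each on-screen target flickers at its own frequency, so the detected frequency identifies which target (e.g., which game command) the user is attending to.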
NASA Astrophysics Data System (ADS)
Barlow, Steven J.
1986-09-01
The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.