Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III
2005-01-01
Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.
Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase
Fu, Li; Zhang, Jun; Li, Rui; Cao, Xianbin; Wang, Jinling
2015-01-01
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions are common, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, VA-RAIM enriches the navigation observations to improve the performance of RAIM. In this method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. The challenging issue, however, is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. The calibrated vision measurements are then integrated with the GPS observations for integrity monitoring. Simulation results show that VA-RAIM outperforms conventional RAIM with a higher level of availability and a higher fault detection rate. PMID:26378533
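As a rough illustration of the consistency-checking machinery that RAIM methods of this kind build on, the sketch below runs a weighted least-squares residual test in which calibrated vision-derived landmark ranges simply appear as extra measurement rows alongside the GPS pseudoranges. The function name, threshold logic, and false-alarm probability are illustrative assumptions, not the authors' VA-RAIM implementation.

```python
# Hedged sketch of a residual-based RAIM consistency check in which
# vision-derived landmark ranges are appended to the GPS pseudoranges.
# All names and thresholds are assumptions for illustration only.
import numpy as np
from scipy.stats import chi2

def raim_consistency_test(G, rho, sigma, p_fa=1e-5):
    """G: (n,4) linearized geometry matrix (unit LOS rows + clock column);
    rho: (n,) pseudorange residuals about the linearization point (m);
    sigma: (n,) 1-sigma noise per measurement (GPS or calibrated vision);
    returns (fault_detected, test_statistic, threshold)."""
    W = np.diag(1.0 / sigma**2)                 # weight by measurement quality
    S = np.linalg.inv(G.T @ W @ G) @ G.T @ W    # weighted least-squares solve
    dx = S @ rho                                # position + clock update
    residuals = rho - G @ dx                    # parity-space residuals
    wsse = residuals @ W @ residuals            # weighted sum of squared errors
    dof = len(rho) - 4                          # redundancy; needs n > 4
    threshold = chi2.ppf(1.0 - p_fa, dof)
    return wsse > threshold, wsse, threshold
```

Landmark measurements add rows to G and rho, increasing the redundancy (degrees of freedom) of the test; this is precisely how enriching the observation set can improve availability and fault detection when few satellites are visible.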
Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)
NASA Astrophysics Data System (ADS)
Ashcraft, Todd W.; Atac, Robert
2012-06-01
Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
Flight Testing an Integrated Synthetic Vision System
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.
NASA Technical Reports Server (NTRS)
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)
2010-03-01
An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. … Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D …
Robotic space simulation integration of vision algorithms into an orbital operations simulation
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1987-01-01
In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.
Grid Integration Webinars | Energy Systems Integration Facility | NREL
… Vision Future: the study used detailed nodal simulations of the Western Interconnection system with greater than 35% wind energy, based on scenarios from the DOE Wind Vision study, to assess the operability. … Renewable Energy Integration in California (April 14, 2016): Greg Brinkman discussed the Low Carbon Grid Study.
Computational approaches to vision
NASA Technical Reports Server (NTRS)
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia- based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g., provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g., the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g., other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
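As a hedged sketch of the registration step described above (not the authors' code), matching radar-extracted objects against database objects can be reduced to gated nearest-neighbour association; objects that fail the gate are flagged as unknown and feed the obstacle-detection and integrity logic.

```python
# Illustrative sketch: classify radar-extracted objects as "known" or
# "unknown" by gated nearest-neighbour registration against database
# objects. The gate size is an assumed placeholder.
import numpy as np

def classify_radar_objects(radar_xy, database_xy, gate_m=50.0):
    """radar_xy: (n,2), database_xy: (m,2) positions in a common frame.
    Returns a list of ('known'|'unknown', matched_db_index_or_None)."""
    labels = []
    for p in radar_xy:
        d = np.linalg.norm(database_xy - p, axis=1)  # distance to each DB object
        j = int(np.argmin(d))
        if d[j] < gate_m:
            labels.append(("known", j))      # consistent with the database
        else:
            labels.append(("unknown", None)) # potential obstacle or DB/nav error
    return labels
```

A high rate of "unknown" objects, or database objects with no radar counterpart, would then lower confidence in the database/navigation integrity.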
Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.
2005-01-01
Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.
Improving Federal Education Programs through an Integrated Performance and Benchmarking System.
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Office of the Under Secretary.
This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…
Integrated navigation, flight guidance, and synthetic vision system for low-level flight
NASA Astrophysics Data System (ADS)
Mehler, Felix E.
2000-06-01
Future military transport aircraft will require a new approach with respect to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or board-autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility, and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance, and synthetic vision system, based on digital terrain data, has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D flight guidance and the display component, which comprises a Head-up and a Head-down Display with synthetic vision. This paper presents the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS), and the results of the flight-test campaign.
The precision measurement and assembly for miniature parts based on double machine vision systems
NASA Astrophysics Data System (ADS)
Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.
2015-02-01
In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrated with a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed with two integrated machine vision systems. In this system, a horizontal vision system is employed to measure the position of feature structures in the parts' side view, which cannot be seen by the vertical one. The position measured by the horizontal camera is converted to the vertical vision system using the calibration information. With careful calibration, the parts' alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment has the characteristics of easy implementation, modularization, and high cost performance. The handling of the miniature parts and the assembly procedure are briefly introduced. The calibration procedure is given, and the assembly error is analyzed for compensation.
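A minimal sketch of the calibration-based hand-off between the two cameras, assuming calibration yields a rigid rotation R and translation t from the horizontal camera's frame to the vertical one; the numeric values below are placeholders, not the equipment's actual calibration.

```python
# Sketch: map a feature position measured in the horizontal (side-view)
# camera's frame into the vertical camera's frame via a calibrated rigid
# transform. R and t below are placeholder values for illustration.
import numpy as np

def to_vertical_frame(p_horizontal, R, t):
    """p_horizontal: (3,) point in horizontal-camera coordinates (mm);
    R: (3,3) rotation, t: (3,) translation from calibration."""
    return R @ p_horizontal + t

# Placeholder calibration: 90-degree rotation about x plus an offset.
R = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
t = np.array([0.0, 120.0, 35.0])           # mm, illustrative offset
p_side = np.array([4.2, 1.0, 250.0])       # feature seen by the side camera
print(to_vertical_frame(p_side, R, t))     # same feature in the vertical frame
```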
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
Design And Implementation Of Integrated Vision-Based Robotic Workcells
NASA Astrophysics Data System (ADS)
Chen, Michael J.
1985-01-01
Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS (Flexible Manufacturing Systems), work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples of the underlying design philosophy and principles in the fabrication of vision-based robotic systems.
Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft
2017-06-01
International Journal of Computer Science and Network Security 7, no. 3: 112-117. Accessed April 7, 2017. http://www.sciencedirect.com/science/article/pii… … the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system. This thesis examines the … integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.
NASA Astrophysics Data System (ADS)
Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan
2010-02-01
The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real-life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, developed indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system, where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands issued from the PC end.
Optical needs of students with low vision in integrated schools of Nepal.
Gnyawali, Subodh; Shrestha, Jyoti Baba; Bhattarai, Dipesh; Upadhyay, Madan
2012-12-01
To identify the optical needs of students with low vision studying in the integrated schools for the blind in Nepal, a total of 779 blind and vision-impaired students studying in 67 integrated schools for the blind across Nepal were examined using the World Health Organization/Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision. Glasses and low-vision devices were provided to the students with low vision who showed improvement in visual acuity up to a level considered sufficient for classroom learning. Follow-up on the use and maintenance of the devices provided was done after a year. Almost 78% of students studying in the integrated schools for the blind were not actually blind; they had low vision. Five students were found to be wrongly enrolled. Avoidable causes of blindness were responsible for 41% of all blindness. Among 224 students who had visual acuity of 1/60 or better, distance vision could be improved in 18.7%, whereas near vision could be improved in 41.1% of students. Optical intervention improved vision in 48.2% of students who were learning braille. Only 34.8% of students were found to be using the devices regularly at assessment 1 year later; the most common causes for nonuse were damage or misplacement of the device. A high proportion of students with low vision in integrated schools could benefit from optical intervention. A system of comprehensive eye examination at the time of school enrollment would allow students with low vision to use their available vision to the fullest, encourage print reading over braille, ensure appropriate placement, and promote timely adoption and proper usage of optical devices.
Advanced integrated enhanced vision systems
NASA Astrophysics Data System (ADS)
Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha
2003-09-01
In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
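The distance measurement rests on ordinary triangulation geometry; a short sketch follows, using the binocular pinhole relation with illustrative focal-length and baseline values (not Masaki's parameters, and the actual system is trinocular and ASIC-based).

```python
# Back-of-envelope sketch of distance-from-disparity triangulation of the
# kind the abstract describes. Parameter values are illustrative.
def range_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: Z = f * B / d. Larger disparity => closer vehicle."""
    if disparity_px <= 0:
        raise ValueError("target not triangulated (zero/negative disparity)")
    return focal_px * baseline_m / disparity_px

print(range_from_disparity(700.0, 0.5, 14.0))  # -> 25.0 m to the lead vehicle
```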
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widergren, Steven E.; Knight, Mark R.; Melton, Ronald B.
The Interoperability Strategic Vision whitepaper aims to promote a common understanding of the meaning and characteristics of interoperability and to provide a strategy to advance the state of interoperability as applied to integration challenges facing grid modernization. This includes addressing the quality of integrating devices and systems and the discipline to improve the process of successfully integrating these components as business models and information technology improve over time. The strategic vision for interoperability described in this document applies throughout the electric energy generation, delivery, and end-use supply chain. Its scope includes interactive technologies and business processes from bulk energy levels to lower voltage level equipment and the millions of appliances that are becoming equipped with processing power and communication interfaces. A transformational aspect of a vision for interoperability in the future electric system is the coordinated operation of intelligent devices and systems at the edges of grid infrastructure. This challenge offers an example for addressing interoperability concerns throughout the electric system.
Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Young, Steven D.
2005-01-01
In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2007-01-01
The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Hierarchical Modelling Of Mobile, Seeing Robots
NASA Astrophysics Data System (ADS)
Luh, Cheng-Jye; Zeigler, Bernard P.
1990-03-01
This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection
D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin
1993-01-01
A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides…
Cohesive ARMD Full UAS Integration Strategy
NASA Technical Reports Server (NTRS)
Hackenberg, Davis
2017-01-01
Introduction / Background; Current Landscape and Future Vision; UAS (Unmanned Aircraft System) Demand and Key Challenges; UAS Airspace Access Pillars and Enablers; Overarching UAS Community Strategy; Long Term Vision Considerations; Recommendations and Next Steps.
Franco-Trigo, L; Tudball, J; Fam, D; Benrimoj, S I; Sabater-Hernández, D
2018-02-21
Collaboration between relevant stakeholders in health service planning enables service contextualization and facilitates its success and integration into practice. Although community pharmacy services (CPSs) aim to improve patients' health and quality of life, their integration in primary care is far from ideal. Key stakeholders for the development of a CPS intended to prevent cardiovascular disease were identified in a previous stakeholder analysis. Engaging these stakeholders to create a shared vision is the subsequent step to focus planning directions and lay sound foundations for future work. This study aims to develop a stakeholder-shared vision of a cardiovascular care model which integrates community pharmacists and to identify initiatives to achieve this vision. A participatory visioning exercise involving 13 stakeholders across the healthcare system was performed. A facilitated workshop, structured in three parts (i.e., introduction; developing the vision; defining the initiatives towards the vision), was designed. The Chronic Care Model inspired the questions that guided the development of the vision. Workshop transcripts, researchers' notes, and materials produced by participants were analyzed using qualitative content analysis. Stakeholders broadened the objective of the vision to focus on the management of chronic diseases. Their vision yielded 7 principles for advanced chronic care: patient-centered care; multidisciplinary team approach; shared goals; long-term care relationships; evidence-based practice; ease of access to healthcare settings and services by patients; and good communication and coordination. Stakeholders also delineated six environmental factors that can influence their implementation. Twenty-four initiatives to achieve the developed vision were defined. The principles and factors identified as part of the stakeholder-shared vision were combined in a preliminary model for chronic care. This model and initiatives can guide policy makers as well as healthcare planners and researchers to develop and integrate chronic disease services, namely CPSs, in real-world settings.
Haldin, Charlotte; Nymark, Soile; Aho, Ann-Christine; Koskelainen, Ari; Donner, Kristian
2009-05-06
Human vision is approximately 10 times less sensitive than toad vision on a cool night. Here, we investigate (1) how far differences in the capacity for temporal integration underlie such differences in sensitivity and (2) whether the response kinetics of the rod photoreceptors can explain temporal integration at the behavioral level. The toad was studied as a model that allows experimentation at different body temperatures. Sensitivity, integration time, and temporal accuracy of vision were measured psychophysically by recording snapping at worm dummies moving at different velocities. Rod photoresponses were studied by ERG recording across the isolated retina. In both types of experiments, the general timescale of vision was varied by using two temperatures, 15 and 25 degrees C. Behavioral integration times were 4.3 s at 15 degrees C and 0.9 s at 25 degrees C, and rod integration times were 4.2-4.3 s at 15 degrees C and 1.0-1.3 s at 25 degrees C. Maximal behavioral sensitivity was fivefold lower at 25 degrees C than at 15 degrees C, which can be accounted for by the inability of the "warm" toads to integrate light over longer times than the rods. However, the long integration time at 15 degrees C, allowing high sensitivity, degraded the accuracy of snapping toward quickly moving worms. We conclude that temporal integration explains a considerable part of all variation in absolute visual sensitivity. The strong correlation between rods and behavior suggests that the integration time of dark-adapted vision is set by rod phototransduction at the input to the visual system. This implies that there is an inexorable trade-off between temporal integration and resolution.
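The reported numbers are internally consistent under complete temporal summation (Bloch's law), in which threshold intensity times integration time is constant, so sensitivity scales with integration time. The check below is our arithmetic on the abstract's figures, not a calculation taken from the paper.

```latex
% Bloch's law: $I_{th} \cdot t_i = \text{const}$ for $t \le t_i$,
% so sensitivity $S \propto t_i$. Using the measured integration times:
\[
\frac{S_{15^{\circ}\mathrm{C}}}{S_{25^{\circ}\mathrm{C}}}
  \approx \frac{t_i(15^{\circ}\mathrm{C})}{t_i(25^{\circ}\mathrm{C})}
  = \frac{4.3\,\mathrm{s}}{0.9\,\mathrm{s}} \approx 4.8,
\]
% in line with the roughly fivefold sensitivity difference reported.
```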
Ethical, environmental and social issues for machine vision in manufacturing industry
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Whelan, Paul F.
1995-10-01
Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues, and raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. However, in a short article like this, it is impossible to cover the subject comprehensively. This paper should therefore be seen as a discussion document, which it is hoped will provoke more detailed consideration of these very important issues. It follows from an article presented at last year's workshop. Five major topics are discussed: (1) the impact of machine vision systems on the environment; (2) the implications of machine vision for product and factory safety, and the health and well-being of employees; (3) the importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) commercial and managerial integrity; and (5) the impact of machine vision technology on employment prospects, particularly for people with low skill levels.
NASA Technical Reports Server (NTRS)
2005-01-01
The Transformational Concept of Operations (CONOPS) provides a long-term, sustainable vision for future U.S. space transportation infrastructure and operations. This vision presents an interagency concept, developed cooperatively by the Department of Defense (DoD), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA), for the upgrade, integration, and improved operation of major infrastructure elements of the nation's space access systems. The interagency vision described in the Transformational CONOPS would transform today's space launch infrastructure into a shared system that supports worldwide operations for a variety of users. The system concept is sufficiently flexible and adaptable to support new types of missions for exploration, commercial enterprise, and national security, as well as to endure further into the future when space transportation technology may be sufficiently advanced to enable routine public space travel as part of the global transportation system. The vision for future space transportation operations is based on a system-of-systems architecture that integrates the major elements of the future space transportation system - transportation nodes (spaceports), flight vehicles and payloads, tracking and communications assets, and flight traffic coordination centers - into a transportation network that concurrently accommodates multiple types of mission operators, payloads, and vehicle fleets. This system concept also establishes a common framework for defining a detailed CONOPS for the major elements of the future space transportation system. The resulting set of four CONOPS (see Figure 1 below) describes the common vision for a shared future space transportation system (FSTS) infrastructure from a variety of perspectives.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
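The paper's model is a spiking network; as a loose, non-spiking illustration of the temporal-coincidence cue such networks exploit, the sketch below matches a left-camera event to the same-row right-camera event closest in time. The event format and coincidence window are assumptions for illustration, not the authors' implementation.

```python
# Loose illustration (not the paper's spiking network): event-based stereo
# can exploit the fact that corresponding pixels in the two cameras emit
# events at nearly the same time. A left event is matched to the same-row,
# same-polarity right event closest in time along the epipolar line.
def match_event(left_event, right_events, dt_max_us=500):
    """left_event: (t_us, x, y, polarity); right_events: list of the same.
    Returns the matched right event (for a disparity estimate) or None."""
    t, _, y, pol = left_event
    candidates = [e for e in right_events
                  if e[2] == y and e[3] == pol        # epipolar row + polarity
                  and abs(e[0] - t) <= dt_max_us]     # temporal coincidence
    return min(candidates, key=lambda e: abs(e[0] - t)) if candidates else None
```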
NASA Technical Reports Server (NTRS)
1999-01-01
Amherst Systems manufactures foveal machine vision technology and systems commercially available to end-users and system integrators. This technology was initially developed under NASA contracts NAS9-19335 (Johnson Space Center) and NAS1-20841 (Langley Research Center). The technology is currently being delivered to university research facilities and military sites. More information may be found at www.amherst.com.
Vision and dual IMU integrated attitude measurement system
NASA Astrophysics Data System (ADS)
Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang
2018-01-01
To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and dual IMUs (inertial measurement units) is built. The measurement system fuses the attitude information from vision with the angular rate measurements of the dual IMUs by an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured motion object and the other (slave) to the rocking base. As the measurement output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame, where the latter can be seen as redundant, harmful movement information for relative attitude measurement between the measured object and the rocking base. The slave IMU here assists in removing the motion information of the rocking base relative to the inertial frame from the master IMU. The proposed integrated attitude measurement system is tested on a practical experimental platform. Experimental results with superior precision and reliability show the feasibility and effectiveness of the proposed attitude measurement system.
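A deliberately simplified, single-axis sketch of the fusion idea follows (the paper uses a full EKF over 3-D attitude): the slave IMU's rate cancels the rocking-base motion, the relative rate propagates the angle estimate, and the vision attitude measurement corrects gyro drift. Noise parameters are illustrative assumptions.

```python
# Single-axis toy version of the dual-IMU + vision fusion. The real system
# is a full EKF over 3-D attitude; this shows only the core structure.
def fuse_step(theta, P, w_master, w_slave, theta_vis, dt,
              q_rate=1e-4, r_vis=1e-2):
    """theta: relative-angle estimate (rad); P: its variance.
    w_master/w_slave: gyro rates (rad/s); theta_vis: vision angle (rad)."""
    w_rel = w_master - w_slave          # base motion cancelled by slave IMU
    theta_pred = theta + w_rel * dt     # propagate with the relative rate
    P_pred = P + q_rate * dt            # grow uncertainty (gyro noise/drift)
    K = P_pred / (P_pred + r_vis)       # Kalman gain for the vision update
    theta_new = theta_pred + K * (theta_vis - theta_pred)
    P_new = (1.0 - K) * P_pred
    return theta_new, P_new
```

The key design point is visible even in this toy form: the gyro pair supplies high-rate, drift-prone motion information, while the slower vision measurement anchors the estimate, and subtracting the slave rate keeps the rocking base out of the state entirely.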
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-12-17
Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III
2007-01-01
NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.
Computer vision for general purpose visual inspection: a fuzzy logic approach
NASA Astrophysics Data System (ADS)
Chen, Y. H.
In automatic industrial visual inspection, computer vision systems have been widely used. Such systems are often application-specific and therefore require domain knowledge for a successful implementation. Since visual inspection can be viewed as a decision-making process, it is argued that the integration of fuzzy logic analysis and computer vision systems provides a practical approach to general purpose visual inspection applications. This paper describes the development of an integrated fuzzy-rule-based automatic visual inspection system. Domain knowledge about a particular application is represented as a set of fuzzy rules. From the status of predefined fuzzy variables, the set of fuzzy rules is defuzzified to give the inspection results. A practical application, the inspection of IC marks (often in the form of English characters and a company logo), is demonstrated; it shows more consistent results than a conventional thresholding method.
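As a concrete illustration of the fuzzy-rule idea, a two-rule inspection decision might look like the sketch below. The rule base, membership shapes, variable names, and thresholds are invented for the example and are not taken from the paper:

```python
def low(x):
    """Membership in LOW: full below 0.3, zero above 0.6 (assumed shape)."""
    return max(0.0, min(1.0, (0.6 - x) / 0.3))

def high(x):
    """Membership in HIGH: zero below 0.4, full above 0.7 (assumed shape)."""
    return max(0.0, min(1.0, (x - 0.4) / 0.3))

def inspect(contrast, completeness):
    # Rule 1: IF contrast is HIGH AND completeness is HIGH THEN mark is GOOD
    good = min(high(contrast), high(completeness))
    # Rule 2: IF contrast is LOW OR completeness is LOW THEN mark is DEFECTIVE
    bad = max(low(contrast), low(completeness))
    # Defuzzify by weighted average of the rule outputs (GOOD=1, DEFECTIVE=0).
    score = good / (good + bad) if (good + bad) > 0 else 0.5
    return ("pass" if score >= 0.5 else "fail", round(score, 2))

print(inspect(contrast=0.8, completeness=0.9))  # -> ('pass', 1.0)
```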
Issues in Humanoid Audition and Sound Source Localization by Active Audition
NASA Astrophysics Data System (ADS)
Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki
In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning microphones orthogonal to the sound source and by capturing the possible sound sources by vision. However, such an active head movement inevitably creates motor noises. The system adaptively cancels motor noises using motor control signals and the cover acoustics. The experimental result demonstrates that active audition by integration of audition, vision, and motor control attains sound source tracking in a variety of conditions.
A computer vision system for the recognition of trees in aerial photographs
NASA Technical Reports Server (NTRS)
Pinz, Axel J.
1991-01-01
Increasing forest damage in Central Europe creates the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to multiple interpretation results for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.
Integration of USB and firewire cameras in machine vision applications
NASA Astrophysics Data System (ADS)
Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard
1999-08-01
Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-01-01
Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
ROBOSIGHT: Robotic Vision System For Inspection And Manipulation
NASA Astrophysics Data System (ADS)
Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh
1989-02-01
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
NASA Technical Reports Server (NTRS)
Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray
2004-01-01
With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.
Evolving EO-1 Sensor Web Testbed Capabilities in Pursuit of GEOSS
NASA Technical Reports Server (NTRS)
Mandl, Dan; Ly, Vuong; Frye, Stuart; Younis, Mohamed
2006-01-01
A viewgraph presentation on evolving sensor web capabilities in support of the Global Earth Observing System of Systems (GEOSS) is shown. The topics include: 1) Vision to Enable Sensor Webs with "Hot Spots"; 2) Vision Extended for Communication/Control Architecture for Missions to Mars; 3) Key Capabilities Implemented to Enable EO-1 Sensor Webs; 4) One of Three Experiments Conducted by UMBC Undergraduate Class 12-14-05 (1 - 3); 5) Closer Look at our Mini-Rovers and Simulated Mars Landscape at GSFC; 6) Beginning to Implement Experiments with Standards-Vision for Integrated Sensor Web Environment; 7) Goddard Mission Services Evolution Center (GMSEC); 8) GMSEC Component Catalog; 9) Core Flight System (CFS) and Extension for GMSEC for Flight SW; 10) Sensor Modeling Language; 11) Seamless Ground to Space Integrated Message Bus Demonstration (completed December 2005); 12) Other Experiments in Queue; 13) Acknowledgements; and 14) References.
A clinical information systems strategy for a large integrated delivery network.
Kuperman, G. J.; Spurr, C.; Flammini, S.; Bates, D.; Glaser, J.
2000-01-01
Integrated delivery networks (IDNs) are an emerging class of health care institutions. IDNs are formed from the affiliation of individual health care institutions and are intended to be more efficient in the current fiscal health care environment. To realize efficiencies and support their strategic visions, IDNs rely critically on excellent information technology (IT). Because of its importance to the mission of the IDN, strategic decisions about IT are made by the top leadership of the IDN. At Partners HealthCare System, a large IDN in Boston, MA, a clinical information systems strategy has been created to support the Partners clinical vision. In this paper, we discuss the Partners' structure, clinical vision, and current IT initiatives in place to address the clinical vision. The initiatives are: a clinical data repository, inpatient process support, electronic medical records, a portal strategy, referral applications, knowledge resources, support for product lines, patient computing, confidentiality, and clinical decision support. We address several of the issues encountered in trying to bring excellent information technology to a large IDN. PMID:11079921
Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base
NASA Astrophysics Data System (ADS)
Guo, Xiaoting; Sun, Changku; Wang, Peng; Huang, Lu
2018-01-01
When a MEMS (Micro-Electro-Mechanical Systems) gyroscope is used to determine the relative attitude between an object on a moving base and the base reference system, the motion of the base is redundant information that must be removed from the gyroscope output. Our strategy is to add an auxiliary gyroscope attached to the reference system: the master gyroscope senses the total motion, and the auxiliary gyroscope senses the motion of the moving base. By a generalized difference method, the relative attitude in a non-inertial frame can be determined from the dual gyroscopes. With the vision sensor suppressing the cumulative drift of the MEMS gyroscope, a vision and dual-MEMS-gyroscope integrated system is formed. Coordinate system definitions and spatial transforms are established in order to fuse inertial and visual data from different coordinate systems. A nonlinear filter algorithm, the cubature Kalman filter, is used to fuse slow visual data and fast inertial data. A practical experimental setup is built and used to validate the feasibility and effectiveness of the proposed attitude determination system in the non-inertial frame on the moving base.
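The paper fuses slow visual data and fast inertial data with a cubature Kalman filter; as a much simpler stand-in that illustrates the same slow/fast fusion pattern (vision periodically pulling a drifting gyro integration back), here is a one-axis complementary-filter sketch with assumed gains and rates:

```python
import numpy as np

def complementary_fuse(gyro_rates, vision_angles, dt, alpha=0.98):
    """1-axis illustration: integrate the fast gyro each step; whenever a
    slower, drift-free vision measurement arrives, blend it in to suppress
    the accumulated gyro drift. vision_angles[k] is None on steps without
    a vision update. Not the paper's cubature Kalman filter."""
    theta = 0.0
    out = []
    for w, vis in zip(gyro_rates, vision_angles):
        theta += w * dt                      # fast inertial propagation
        if vis is not None:                  # slow vision correction
            theta = alpha * theta + (1 - alpha) * vis
        out.append(theta)
    return np.array(out)
```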
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory-controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high-speed force control, 2.5-D vision alignment and control, and multiple-processor architectures. The subsequent design of a flexible, programmable, sensor-controlled robotic assembly system for small electromechanical devices using these technologies is described, along with ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high-speed part mating, in-process monitoring/verification of expected results, and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.
Advancing adverse outcome pathways for integrated toxicology and regulatory applications
Recent regulatory efforts in many countries have focused on a toxicological pathway-based vision for human health assessments relying on in vitro systems and predictive models to generate the toxicological data needed to evaluate chemical hazard. A pathway-based vision is equally...
Medical informatics and telemedicine: A vision
NASA Technical Reports Server (NTRS)
Clemmer, Terry P.
1991-01-01
The goal of medical informatics is to improve care. This requires the commitment and harmonious collaboration between the computer scientists and clinicians and an integrated database. The vision described is how medical information systems are going to impact the way medical care is delivered in the future.
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot, using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rate were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of dedicated vision hardware architecture, and the implementation design details. The hierarchical and modular approach applied to all aspects of the system, hardware and software alike, was important to the overall philosophy of the complete system, so special emphasis is placed on this topic in the paper.
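The abstract does not detail the control law; a minimal illustration of one vision-in-the-loop tracking step, with invented pixel coordinates and gain, might look like this:

```python
import numpy as np

def track_step(target_px, center_px, gain=0.002):
    """One iteration of a simple proportional visual servo: convert the
    target's pixel offset from the image center into a velocity command.
    Gain and units are illustrative only."""
    error = np.asarray(target_px, float) - np.asarray(center_px, float)
    return -gain * error  # command that drives the pixel error to zero

# Because the vision system sits inside the loop closure, the sum of
# image capture, vision processing, and command transport delays must
# fit inside the control period - the reason frame-rate dedicated
# vision hardware was required.
cmd = track_step(target_px=(400, 260), center_px=(320, 240))
```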
Detmer, D E
2010-01-01
Substantial global and national commitment will be required for current healthcare systems and health professional practices to become learning care systems utilizing information and communications technology (ICT) empowered by informatics. To engage this multifaceted challenge, a vision is required that shifts the emphasis from silos of activities toward integrated systems. Successful systems will include a set of essential elements, e.g., a sufficient ICT infrastructure, evolving health care processes based on evidence and harmonized to local cultures, a fresh view toward educational preparation, sound and sustained policy support, and ongoing applied research and development. Increasingly, leaders are aware that ICT empowered by informatics must be an integral part of their national and regional visions. This paper sketches out the elements of what is needed in terms of objectives and some steps toward achieving them. It summarizes some of the progress that has been made to date by the American and International Medical Informatics Associations working separately as well as collaborating to conceptualize informatics capacity building in order to bring this vision to reality in low resource nations in particular.
Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems
NASA Technical Reports Server (NTRS)
Liu, Xuan; Furrer, David; Kosters, Jared; Holmes, Jack
2018-01-01
Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, or Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations), which constitutes a community consensus document, as it results from the input of over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of nine expert panels (one per KE), each consisting on average of 10 non-team members from academia, government, and industry who reviewed and updated content and prioritized gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focused on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive, biomedical, etc.) can benefit as well from the proposed framework with only minor modifications. Finally, it is TTT's hope and desire that this vision provides the strategic guidance to both public and private research and development decision makers to make the proposed 2040 vision state a reality and thereby provide a significant advancement in the United States' global competitiveness.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), then simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
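For reference, a software analogue of the multi-scale Laplacian-of-Gaussian edge detection that the paper maps onto the FPGA might look like the sketch below; the scales and threshold are illustrative, and the actual design is a hardware convolution architecture, not this code:

```python
import numpy as np
from scipy import ndimage

def multiscale_log_edges(image, sigmas=(1.0, 2.0, 4.0), thresh=0.01):
    """Filter at several Gaussian scales with a Laplacian-of-Gaussian and
    mark horizontal zero-crossings whose LoG magnitude is significant."""
    edges = []
    for s in sigmas:
        log = ndimage.gaussian_laplace(image.astype(float), sigma=s)
        # A zero-crossing exists where the sign changes between neighbors.
        zc = np.sign(log[:, :-1]) != np.sign(log[:, 1:])
        strong = np.abs(log[:, :-1]) > thresh * np.abs(log).max()
        edges.append(zc & strong)
    return edges

# img = ...  # 2-D grayscale array from the CMOS sensor
# edge_maps = multiscale_log_edges(img)
```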
Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.
2008-01-01
NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS - true systems, rather than just terrain on a flight display - and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor to accidents and enable clear-day operational benefits regardless of visibility conditions.
Function-based design process for an intelligent ground vehicle vision system
NASA Astrophysics Data System (ADS)
Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.
2010-10-01
An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
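The abstract's ray-casting idea can be sketched in a few lines; the grid representation, step size, and heading fan below are assumptions for illustration, not details from the paper:

```python
import math

def cast_ray(grid, x0, y0, angle, max_range, step=0.5):
    """March a ray through an occupancy grid (True = obstacle) and return
    the distance to the first obstacle, or max_range if the path is clear.
    A fan of such rays over the forward arc identifies candidate paths."""
    d = 0.0
    while d < max_range:
        x = int(x0 + d * math.cos(angle))
        y = int(y0 + d * math.sin(angle))
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            break                    # ray left the mapped area
        if grid[y][x]:
            return d                 # hit an obstacle
        d += step
    return max_range

# Pick the heading whose ray travels farthest before hitting an obstacle:
# headings = [math.radians(a) for a in range(-60, 61, 5)]
# best = max(headings, key=lambda a: cast_ray(grid, 50, 0, a, 40.0))
```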
Visions of Automation and Realities of Certification
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Holloway, Michael C.
2005-01-01
Quite a lot of people envision automation as the solution to many of the problems in aviation and air transportation today, across all sectors: commercial, private, and military. This paper explains why some recent experiences with complex, highly-integrated, automated systems suggest that this vision will not be realized unless significant progress is made over the current state-of-the-practice in software system development and certification.
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
Public health policy for preventing violence.
Mercy, J A; Rosenberg, M L; Powell, K E; Broome, C V; Roper, W L
1993-01-01
The current epidemic of violence in America threatens not only our physical health but also the integrity of basic social institutions such as the family, the communities in which we live, and our health care system. Public health brings a new vision of how Americans can work together to prevent violence. This new vision places emphasis on preventing violence before it occurs, making science integral to identifying effective policies and programs, and integrating the efforts of diverse scientific disciplines, organizations, and communities. A sustained effort at all levels of society will be required to successfully address this complex and deeply rooted problem.
Human Systems Integration at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
McCandless, Jeffrey
2017-01-01
The Human Systems Integration Division focuses on the design and operations of complex aerospace systems through analysis, experimentation and modeling. With over a dozen labs and over 120 people, the division conducts research to improve safety, efficiency and mission success. Areas of investigation include applied vision research which will be discussed during this seminar.
NASA Astrophysics Data System (ADS)
Kim, J.
2016-12-01
Considering high levels of uncertainty, epistemological conflicts over facts and values, and a sense of urgency, normal paradigm-driven science will be insufficient to mobilize people and nations toward sustainability. The conceptual framework to bridge the societal system dynamics with that of the natural ecosystems in which humanity operates remains deficient. The key to understanding their coevolution is to understand 'self-organization.' An information-theoretic approach may shed light on a potential framework that enables us not only to bridge humans and nature but also to generate useful knowledge for understanding and sustaining the integrity of ecological-societal systems. How can information theory help understand the interface between ecological systems and social systems? How can self-organizing processes be delineated and ensured to fulfil sustainability? How should the flow of information from data through models to decision-makers be evaluated? These are the core questions posed by sustainability science, in which visioneering (i.e., the engineering of vision) is an essential framework. Yet visioneering has neither a quantitative measure nor an information-theoretic framework to work with and teach. This presentation is an attempt to accommodate the framework of self-organizing hierarchical open systems with visioneering into a common information-theoretic framework. A case study is presented with the UN/FAO's communal vision of climate-smart agriculture (CSA), which pursues a trilemma of efficiency, mitigation, and resilience. Challenges of delineating and facilitating self-organizing systems are discussed using transdisciplinary tools such as complex systems thinking, dynamic process network analysis, and multi-agent systems modeling. Acknowledgments: This study was supported by the Korea Meteorological Administration Research and Development Program under Grant KMA-2012-0001-A (WISE project).
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and a robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work which is currently in progress, is described along with preliminary results and problems encountered.
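As a small worked illustration of the stereo-to-task-space mapping being calibrated (with invented calibration values; this is not the authors' method), a rectified stereo pair yields a 3-D point as follows:

```python
import numpy as np

def triangulate(uL, uR, focal_px, baseline_m, v, cx=320, cy=240):
    """Recover a 3-D point (camera frame) from a matched pixel pair in a
    rectified stereo rig: depth from disparity, then back-projection.
    Disparity uL - uR must be positive for a point in front of the rig.
    All parameters here are illustrative calibration values."""
    disparity = uL - uR                    # pixels
    Z = focal_px * baseline_m / disparity  # depth (m)
    X = (uL - cx) * Z / focal_px
    Y = (v - cy) * Z / focal_px
    return np.array([X, Y, Z])

# p_cam = triangulate(uL=350, uR=310, focal_px=800, baseline_m=0.12, v=260)
# The remaining calibration task is the fixed transform from this camera
# frame into the robot's configuration (joint-angle) space.
```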
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Integrating Child Health Information Systems
Hinman, Alan R.; Eichwald, John; Linzer, Deborah; Saarlas, Kristin N.
2005-01-01
The Health Resources and Services Administration and All Kids Count (a national technical assistance center fostering development of integrated child health information systems) have been working together to foster development of integrated child health information systems. Activities have included: identification of key elements for successful integration of systems; development of principles and core functions for the systems; a survey of state and local integration efforts; and a conference to develop a common vision for child health information systems to meet medical care and public health needs. We provide 1 state (Utah) as an example that is well on the way to development of integrated child health information systems. PMID:16195524
Fusion of Synthetic and Enhanced Vision for All-Weather Commercial Aviation Operations
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence, III
2007-01-01
NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was not adversely impacted by the display concepts, although the addition of Enhanced Vision did not, unto itself, provide an improvement in runway incursion detection.
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection through efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
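VPEF's actual interfaces are not described in the abstract; purely as a hypothetical sketch of what a standardized detection-plugin interface of this kind could look like (every name below is invented, none is from VPEF):

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List, Protocol

@dataclass
class Detection:
    frame_id: int
    bbox: tuple        # (x, y, w, h) in pixels
    label: str
    confidence: float

class Detector(Protocol):
    """Any algorithm that consumes a frame and emits detections."""
    def process(self, frame_id: int, frame) -> List[Detection]: ...

def run_pipeline(frames: Iterable, detectors: List[Detector]) -> Iterator[Detection]:
    """Feed each frame to every registered detector and publish results;
    a fixed interface like this is what lets new algorithms drop into a
    standardized processing chain unchanged."""
    for i, frame in enumerate(frames):
        for det in detectors:
            yield from det.process(i, frame)
```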
FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision
Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are demonstrated through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.
Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are demonstrated through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.
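To make the two low-level primitives concrete, the sketch below computes ordinary raw image moments and a single-window least-squares optical-flow estimate. Note the paper combines optical flow with "orthogonal variant" moments in a VLSI design, so this is only an illustrative software stand-in:

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment M_pq, one of the low-level primitives combined
    with optical flow in the sensor described above."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float((img * x**p * y**q).sum())

def flow_window(I0, I1):
    """Least-squares optical flow (u, v) over one image window: solve
    [Ix Iy] [u v]^T = -It from spatial and temporal gradients."""
    Ix = np.gradient(I0.astype(float), axis=1)
    Iy = np.gradient(I0.astype(float), axis=0)
    It = I1.astype(float) - I0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v
```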
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhr, L.
1987-01-01
This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
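A software illustration of the pyramid processing structure mentioned above: the laminar hardware would realize this level-to-level data reduction in parallel across the 2-D array, so this sequential sketch is only an analogy:

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Each level halves resolution by averaging 2x2 blocks, mimicking
    the layer-to-layer reduction of a laminar 2-D processor stack."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]  # trim odd rows/columns before block averaging
        pyramid.append((a[0::2, 0::2] + a[1::2, 0::2] +
                        a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return pyramid
```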
Integrated 3-D vision system for autonomous vehicles
NASA Astrophysics Data System (ADS)
Hou, Kun M.; Shawky, Mohamed; Tu, Xiaowei
1992-03-01
Nowadays, autonomous vehicles have become a multidisciplinary field. Their evolution is taking advantage of recent technological progress in computer architectures. As development tools become more sophisticated, the trend is toward more specialized, or even dedicated, architectures. In this paper, we focus our interest on a parallel vision subsystem integrated in the overall system architecture. The system modules work in parallel, communicating through a hierarchical blackboard, an extension of the 'tuple space' from LINDA concepts, where they may exchange data or synchronization messages. The general purpose processing elements are of different skills, built around 40 MHz i860 Intel RISC processors for high-level processing and pipelined systolic array processors based on PLAs or FPGAs for low-level processing.
High-accuracy microassembly by intelligent vision systems and smart sensor integration
NASA Astrophysics Data System (ADS)
Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael
2003-10-01
Innovative production processes and strategies, from batch production to high-volume scale, are playing a decisive role in producing microsystems economically. In particular, assembly processes are crucial operations during the production of microsystems. Due to large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes there is a wide field of small- and medium-sized batch production, for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design. Actuators like grippers, dispensers, or other process tools can easily be attached thanks to a special tool-changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators like high-accuracy robots or linear motors. A fiber-optic sensor is integrated in the dispensing module to measure, without contact, the clearance between the dispensing needle and the substrate. Robot vision systems using the strategy of optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes like soldering.
A Practitioner's Perspective on Taxonomy, Ontology and Findability
NASA Technical Reports Server (NTRS)
Berndt, Sarah
2011-01-01
This slide presentation reviews the presenter's perspective on developing a taxonomy for JSC to capitalize on the accomplishments of yesterday while maintaining the flexibility needed for the evolving information of today. A clear vision and scope for the semantic system are integral to its success. The vision for the JSC Taxonomy is to connect information stovepipes to present a unified view for information and knowledge across the Center, across organizations, and across decades. Semantic search at JSC means seamless integration of disparate information sets into a single interface. Ever increasing use, interest, and organizational participation mark successful integration and provide the framework for future application.
Taxonomy, Ontology and Semantics at Johnson Space Center
NASA Technical Reports Server (NTRS)
Berndt, Sarah Ann
2011-01-01
At NASA Johnson Space Center (JSC), the Chief Knowledge Officer has been developing the JSC Taxonomy to capitalize on the accomplishments of yesterday while maintaining the flexibility needed for the evolving information environment of today. A clear vision and scope for the semantic system are integral to its success. The vision for the JSC Taxonomy is to connect information stovepipes to present a unified view for information and knowledge across the Center, across organizations, and across decades. Semantic search at JSC means seamless integration of disparate information sets into a single interface. Ever increasing use, interest, and organizational participation mark successful integration and provide the framework for future application.
NASA Astrophysics Data System (ADS)
Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay
2017-12-01
Road safety and driving in dense traffic flows pose challenges in receiving information about surrounding moving objects, some of which can be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in a current road scene via a system with a multitude of cooperating smart vehicles exchanging information. It also describes the intelligent agent model, and provides methods and algorithms for identifying and evaluating various characteristics of moving objects in a video stream. The authors also suggest ways of integrating the information from the technical vision system into the model, with further expansion of virtual monitoring for the system's objects. Implementation of this approach can help to expand the virtual field of view of a technical vision system.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-01
In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m. PMID:26828496
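The paper's matching algorithm is not reproduced in the abstract; as an illustrative stand-in for matching a real-time image against GRID candidates retrieved via the road-segment index, an ORB/Hamming brute-force matcher (OpenCV) could be used as follows (feature counts and thresholds are assumptions):

```python
import cv2

def best_database_match(query_img, candidate_imgs, min_matches=25):
    """Match a real-time grayscale image against geo-referenced candidate
    images and return the index of the best match, or None if no
    candidate clears the match-count threshold."""
    orb = cv2.ORB_create(nfeatures=1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, dq = orb.detectAndCompute(query_img, None)
    best, best_count = None, min_matches
    for i, cand in enumerate(candidate_imgs):
        _, dc = orb.detectAndCompute(cand, None)
        if dq is None or dc is None:
            continue
        count = len(bf.match(dq, dc))
        if count > best_count:
            best, best_count = i, count
    return best
```

The matched database image, carrying known geo-reference, would then anchor the computation of the platform's 3D navigation parameters.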
Telerobotic controller development
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, Ken; Rhoades, Don
1987-01-01
To meet the needs and growth of NASA's space station, a modular and generic approach to robotic control was developed that provides near-term implementation with low development cost and the capability to grow into more autonomous systems. The method uses a vision-based robotic controller and compliant hand integrated with the Remote Manipulator System arm on the Orbiter. A description of the hardware and its system integration is presented.
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III; Wilz, Susan J.
2009-01-01
NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. Improvements in lateral path control performance were realized when the Head-Up Display concepts included a tunnel, independent of the imagery (enhanced vision or fusion of enhanced and synthetic vision) presented with it. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application.
Evaluation of the advanced operating system of the Ann Arbor Transit Authority
DOT National Transportation Integrated Search
1999-10-01
These reports constitute an evaluation of the intelligent transportation system deployment efforts of the Ann Arbor Transportation Authority. These efforts, collectively termed "Advanced Operating System" (AOS), represent a vision of an integrated ad...
Latency Requirements for Head-Worn Display S/EVS Applications
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Trey Arthur, J. J., III; Williams, Steven P.
2004-01-01
NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.
Latency requirements for head-worn display S/EVS applications
NASA Astrophysics Data System (ADS)
Bailey, Randall E.; Arthur, Jarvis J., III; Williams, Steven P.
2004-08-01
NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.
The Application of Lidar to Synthetic Vision System Integrity
NASA Technical Reports Server (NTRS)
Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve
2003-01-01
One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA's Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper will address the consistency checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
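The consistency-check idea can be illustrated with a toy test statistic; the actual monitor's statistic and alert threshold are defined in the paper, so the function below is only a hedged sketch:

```python
import numpy as np

def terrain_consistency_stat(synthesized, database):
    """Illustrative integrity check: compare a LiDAR/GPS/IMU-synthesized
    terrain profile against the stored SVS terrain database and return
    the mean absolute elevation disparity (m). A large value suggests
    the database is inconsistent with what the sensors observe."""
    disparities = np.asarray(synthesized, float) - np.asarray(database, float)
    return float(np.mean(np.abs(disparities)))

# alert = terrain_consistency_stat(lidar_profile, dem_profile) > threshold_m
```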
Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III
2006-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that the integration and/or fusion of synthetic and enhanced vision technologies could provide significant improvements in situation awareness (SA), without concomitant increases in workload and display clutter, for the pilot-flying and the pilot-not-flying.
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
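The recursive topology lends itself to a recursive data definition. The sketch below is only an illustration of how a NETRA-like tree of processor clusters scales by composition; the class names are invented and no claim is made about the actual hardware interfaces.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Cluster:
    """Leaf node: a cluster of processors behind a programmable crossbar."""
    n_processors: int

@dataclass
class Node:
    """Internal node of the recursively defined tree."""
    children: List[Union["Node", Cluster]] = field(default_factory=list)

def total_processors(node) -> int:
    if isinstance(node, Cluster):
        return node.n_processors
    return sum(total_processors(child) for child in node.children)

# A two-level NETRA-like topology: four 16-processor clusters. Scaling up
# means adding subtrees rather than redesigning the interconnect.
net = Node([Node([Cluster(16), Cluster(16)]), Node([Cluster(16), Cluster(16)])])
print(total_processors(net))  # 64
```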
Machine vision for digital microfluidics
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun; Lee, Jeong-Bong
2010-01-01
Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
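As a flavor of the vision-in-the-loop idea, the sketch below locates a droplet by thresholding and centroiding, then uses the position error to decide whether to energize the next electrode. It is a deliberately minimal stand-in, assuming a single dark droplet on a bright background; the authors' measurement and control pipeline is richer than this.

```python
import numpy as np

def droplet_centroid(frame, threshold=0.3):
    """Locate a single dark droplet on a bright background: threshold the
    grayscale frame, then take the centroid of the droplet pixels."""
    ys, xs = np.nonzero(frame < threshold)
    if xs.size == 0:
        return None                      # no droplet in view
    return xs.mean(), ys.mean()          # (x, y) in pixel coordinates

def control_step(frame, target_x, tol=2.0):
    """Return True while the next electrode should stay energized, i.e. while
    the droplet still lags the target position by more than `tol` pixels."""
    pos = droplet_centroid(frame)
    return pos is not None and (target_x - pos[0]) > tol
```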
NASA Astrophysics Data System (ADS)
Lee, El-Hang; Lee, S. G.; O, B. H.; Park, S. G.; Noh, H. S.; Kim, K. H.; Song, S. H.
2006-09-01
A collective overview and review is presented of the original work conducted on the theory, design, fabrication, and integration of micro/nano-scale optical wires and photonic devices for applications in newly conceived photonic systems called "optical printed circuit boards" (O-PCBs) and "VLSI photonic integrated circuits" (VLSI-PIC). These are aimed at compact, high-speed, multi-functional, intelligent, light-weight, low-energy and environmentally friendly, low-cost, and high-volume applications to complement or surpass the capabilities of electrical PCBs (E-PCBs) and/or VLSI electronic integrated circuit (VLSI-IC) systems. These consist of 2-dimensional or 3-dimensional planar arrays of micro/nano-optical wires and circuits to perform the functions of all-optical sensing, storing, transporting, processing, switching, routing and distributing optical signals on flat modular boards or substrates. The integrated optical devices include micro/nano-scale waveguides, lasers, detectors, switches, sensors, directional couplers, multi-mode interference devices, ring-resonators, photonic crystal devices, plasmonic devices, and quantum devices, made of polymer, silicon and other semiconductor materials. For VLSI photonic integration, photonic crystals and plasmonic structures have been used. Scientific and technological issues concerning the processes of miniaturization, interconnection and integration of these systems as applicable to board-to-board, chip-to-chip, and intra-chip integration are discussed, along with applications for future computers, telecommunications, and sensor systems. Visions and challenges toward these goals are also discussed.
Harley, H E; Roitblat, H L; Nachtigall, P E
1996-04-01
A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of the sample and the alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.
NASA Technical Reports Server (NTRS)
Glass, Charles E.; Boyd, Richard V.; Sternberg, Ben K.
1991-01-01
The overall aim is to provide base technology for an automated vision system for on-board interpretation of geophysical data. During the first year's work, it was demonstrated that geophysical data can be treated as patterns and interpreted using single neural networks. Current research is developing an integrated vision system comprising neural networks, algorithmic preprocessing, and expert knowledge. This system is to be tested incrementally using synthetic geophysical patterns, laboratory generated geophysical patterns, and field geophysical patterns.
A Systems Approach to School Reform.
ERIC Educational Resources Information Center
McAdams, Richard P.
1997-01-01
Summarizes leading scholars' findings in leadership theory, local politics and government, state and national school politics, and change theory. Integrating this knowledge into a systematic reform effort requires superintendents with integrity and vision; political stability; good board/superintendent relations; long-term, statewide commitment;…
A landscape vision for integrating industrial crops into biofuel systems
USDA-ARS?s Scientific Manuscript database
Achieving energy independence and security through domestic production of renewable biofuels is feasible but will require a different landscape than we have with current agricultural practices. Integrating industrial crops such as Canola, Camelina, or Cuphea could offer many opportunities to enhance...
A Model for Integrating Low Vision Services into Educational Programs.
ERIC Educational Resources Information Center
Jose, Randall T.; And Others
1988-01-01
A project integrating low-vision services into children's educational programs comprised four components: teacher training, functional vision evaluations for each child, a clinical examination by an optometrist, and follow-up visits with the optometrist to evaluate the prescribed low-vision aids. Educational implications of the project and project…
The 3-D vision system integrated dexterous hand
NASA Technical Reports Server (NTRS)
Luo, Ren C.; Han, Youn-Sik
1989-01-01
Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from the problems of stiction and friction of the tendons, resulting in a reduction of control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips and a two-jointed eye finger with a cross-shaped laser-beam-emitting diode in its distal part. The two non-grasping fingers allow 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
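The core of such a sensor is estimating frame-to-frame motion to a fraction of a pixel at frame rate. The sketch below shows one classic way to do this in 1-D: take the integer cross-correlation peak and refine it with a three-point parabolic fit. It illustrates subpixel refinement generally; the paper's modified Taylor-approximation and localization-refinement algorithms differ in detail.

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the 1-D shift of signal b relative to a: integer peak of the
    cross-correlation, refined by fitting a parabola through the peak and
    its two neighbours (illustrative, not the paper's algorithm)."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(b, a, mode="full")       # lags -(N-1) .. (N-1)
    k = np.argmax(corr)
    if 0 < k < len(corr) - 1:                    # 3-point parabolic refinement
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k - (len(a) - 1)                      # shift in samples

t = np.arange(200, dtype=float)
pulse = lambda c: np.exp(-0.5 * ((t - c) / 6.0) ** 2)
print(subpixel_shift(pulse(100.0), pulse(100.4)))  # ~ 0.40 samples
```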
Night vision imaging system design, integration and verification in spacecraft vacuum thermal test
NASA Astrophysics Data System (ADS)
Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing
2015-08-01
The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow for early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Because the infrared cage and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate under the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so compensating illumination cannot be used during the test. To improve fine monitoring of the spacecraft and the presentation of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensifier ICCD camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system could operate in a vacuum thermal environment of 1.33×10⁻³ Pa vacuum degree and 100 K shroud temperature in the space environment simulator, and its working temperature was maintained at 5 °C during the two-day test. The night vision imaging system obtained video with a resolving power of 60 lp/mm.
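Multi-frame accumulation is the one algorithmic ingredient named in the abstract, and its benefit is easy to demonstrate: averaging N registered frames of a static scene suppresses zero-mean temporal noise by roughly sqrt(N). A minimal sketch, assuming the frames are already aligned:

```python
import numpy as np

def accumulate_frames(frames):
    """Multi-frame accumulation: the mean of N aligned frames of a static
    scene reduces zero-mean temporal noise by about sqrt(N), raising SNR
    under ultra-low luminance. A sketch of the named technique only."""
    return np.stack(frames).astype(np.float64).mean(axis=0)

rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, (64, 64))                       # static target
frames = [scene + rng.normal(0, 0.5, scene.shape) for _ in range(64)]
print(np.std(frames[0] - scene), np.std(accumulate_frames(frames) - scene))
# residual noise drops from ~0.5 to ~0.5/8 with N = 64
```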
DOT National Transportation Integrated Search
1998-01-01
To achieve unprecedented levels of integration, AZTech would be required to do no less than set new standards for inter-agency and public/private cooperation. The first step was to achieve institutional integration. This involved forming an effective...
Beam Splitter For Welding-Torch Vision System
NASA Technical Reports Server (NTRS)
Gilbert, Jeffrey L.
1991-01-01
Compact welding torch equipped with along-the-torch vision system includes cubic beam splitter to direct preview light on weldment and to reflect light coming from welding scene for imaging. Beam splitter integral with torch; requires no external mounting brackets. Rugged and withstands vibrations and wide range of temperatures. Commercially available, reasonably priced, comes in variety of sizes and optical qualities with antireflection and interference-filter coatings on desired faces. Can provide 50 percent transmission and 50 percent reflection of incident light to exhibit minimal ghosting of image.
Integrated Collision Avoidance System for Air Vehicle
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2013-01-01
Collision with ground/water/terrain and midair obstacles is one of the common causes of severe aircraft accidents. The various data from the coremicro AHRS/INS/GPS Integration Unit, the terrain database, and object detection sensors are processed to produce collision warning audio/visual messages and to provide collision detection and avoidance of terrain and obstacles through the generation of guidance commands in a closed-loop system. The vision sensors provide additional information to the integrated system, such as terrain recognition and ranging of terrain and obstacles, which plays an important role in improving the Integrated Collision Avoidance System.
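A reduced illustration of the terrain-warning element is a straight-line look-ahead check against a terrain elevation database. Everything below (function names, the fixed horizon, the clearance value) is an assumption for illustration; the patented system fuses several sensors and closes the loop with guidance commands.

```python
import numpy as np

def look_ahead_alert(pos, vel, terrain_elev, horizon_s=60.0, dt=1.0,
                     min_clearance=150.0):
    """Straight-line look-ahead collision check against a terrain database.

    pos: (north, east, altitude) in metres; vel: (vn, ve, vd) in m/s,
    vd positive down; terrain_elev: callable (north, east) -> elevation (m)
    """
    for t in np.arange(dt, horizon_s + dt, dt):
        n, e = pos[0] + vel[0] * t, pos[1] + vel[1] * t
        alt = pos[2] - vel[2] * t
        if alt - terrain_elev(n, e) < min_clearance:
            return True, t               # alert, with time-to-threat (s)
    return False, None
```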
2007-07-01
[Front matter and table-of-contents residue: panel names (SAS System Analysis and Studies; SCI Systems Concepts and Integration; SET Sensors and Electronics Technology) and contents entries on daylight readability, night-time readability, NVIS radiance, human factors analysis, and flight tests.] Moonlight creates shadows during night-time just as sunlight does during the day. Understanding what cannot be seen in night-time
Computer interfaces for the visually impaired
NASA Technical Reports Server (NTRS)
Higgins, Gerry
1991-01-01
Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. The first is research into the most effective means of integrating existing adaptive technologies into information systems, conducted to combine off-the-shelf products with adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile interfaces to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.
A study on integrating surveys of terrestrial natural resources: The Oregon Demonstration Project
J. Jeffery Goebel; Hans T. Schreuder; Carol C. House; Paul H. Geissler; Anthony R. Olsen; William Williams
1998-01-01
An interagency project demonstrated the feasibility of integrating Federal surveys of terrestrial natural resources and offers a vision for that integration. At locations selected from forest inventory and analysis, National forest system Region 6, and national resources inventory surveys in a six-county area in Northern Oregon, experienced teams interpreted and made...
DOT National Transportation Integrated Search
1998-01-01
To achieve the full integration of varied traffic management, : emergency services and transit systems in the sprawling Valley of the Sun metropolitan area, no small amount of coordination is required. : In the interest of smoothing vehicle traffic t...
NASA Technical Reports Server (NTRS)
By, Andre Bernard; Caron, Ken; Rothenberg, Michael; Sales, Vic
1994-01-01
This paper presents the first phase results of a collaborative effort between university researchers and a flexible assembly systems integrator to implement a comprehensive modular approach to flexible assembly automation. This approach, named MARAS (Modular Automated Reconfigurable Assembly System), has been structured to support multiple levels of modularity in terms of both physical components and system control functions. The initial focus of the MARAS development has been on parts gauging and feeding operations for cylinder lock assembly. This phase is nearing completion and has resulted in the development of a highly configurable system for vision gauging functions on a wide range of small components (2 mm to 100 mm in size). The reconfigurable concepts implemented in this adaptive Vision Gauging Module (VGM) are now being extended to applicable aspects of the singulating, selecting, and orienting functions required for the flexible feeding of similar mechanical components and assemblies.
Integrating emergency services in an urban health system.
Radloff, D; Blouin, A S; Larsen, L; Kripp, M E
2000-03-01
When planning for growth and management efficiency across urban health systems, economic and market factors present significant service line challenges and opportunities. This article describes the evolutionary integration of emergency services in St John Health System, a large, religious-sponsored health care system located in Detroit, Michigan. Critical business elements, including the System's vision, mission, and economic context, are defined as the framework for site-specific and System-wide planning. The impact of managed care and market changes prompted St John's clinicians and executives to explore how integrating emergency services could create a competitive market advantage.
DOT National Transportation Integrated Search
2002-12-01
The Virginia Department of Transportation, like many other transportation agencies, has invested significantly in extensive closed circuit television (CCTV) systems to monitor freeways in urban areas. Although these systems have proven very effective...
Notes from a clinical information system program manager. A solid vision makes all the difference.
Staggers, N
1997-01-01
Today's CIS manager will create a vision that connects computerization in ambulatory, home and community-based care with increased responsibility for patients to assume self-care. Patients will be faced with a glut of information and they will need nursing help in determining the validity of information. The new vision in this environment will focus on integration, interoperability, and a new definition for patient-centered information. Creating a well-articulated vision is the first skill in the repertoire of a CIS manager's tool set. A vision provides the firm structure upon which the entire project can be built, and provides for links to life-cycle planning. This first step in project planning begins to bring order to the chaos of dynamic demands in clinical computing.
Integration of a 3D perspective view in the navigation display: featuring pilot's mental model
NASA Astrophysics Data System (ADS)
Ebrecht, L.; Schmerwitz, S.
2015-05-01
Synthetic vision systems (SVS) are an emerging technology in the avionics domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has been under way concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS) data, weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of presenting a 3D perspective view in the SVS-PFD while leaving the navigational content and the methods of interaction unchanged, the question arises whether and how the gap between both displays might evolve into a serious problem. This issue becomes important in relation to the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.
Developing a Health Information Technology Systems Matrix: A Qualitative Participatory Approach
Chavez, Margeaux; Nazi, Kim M; Antinori, Nicole
2016-01-01
Background The US Department of Veterans Affairs (VA) has developed various health information technology (HIT) resources to provide accessible veteran-centered health care. Currently, the VA is undergoing a major reorganization of VA HIT to develop a fully integrated system to meet consumer needs. Although extensive system documentation exists for various VA HIT systems, a more centralized and integrated perspective with clear documentation is needed in order to support effective analysis, strategy, planning, and use. Such a tool would enable a novel view of what is currently available and support identifying and effectively capturing the consumer’s vision for the future. Objective The objective of this study was to develop the VA HIT Systems Matrix, a novel tool designed to describe the existing VA HIT system and identify consumers’ vision for the future of an integrated VA HIT system. Methods This study utilized an expert panel and veteran informant focus groups with self-administered surveys. The study employed participatory research methods to define the current system and understand how stakeholders and veterans envision the future of VA HIT and interface design (eg, look, feel, and function). Directed content analysis was used to analyze focus group data. Results The HIT Systems Matrix was developed with input from 47 veterans, an informal caregiver, and an expert panel to provide a descriptive inventory of existing and emerging VA HIT in four worksheets: (1) access and function, (2) benefits and barriers, (3) system preferences, and (4) tasks. Within each worksheet is a two-axis inventory. The VA’s existing and emerging HIT platforms (eg, My HealtheVet, Mobile Health, VetLink Kiosks, Telehealth), My HealtheVet features (eg, Blue Button, secure messaging, appointment reminders, prescription refill, vet library, spotlight, vitals tracker), and non-VA platforms (eg, phone/mobile phone, texting, non-VA mobile apps, non-VA mobile electronic devices, non-VA websites) are organized by row. Columns are titled with thematic and functional domains (eg, access, function, benefits, barriers, authentication, delegation, user tasks). Cells for each sheet include descriptions and details that reflect factors relevant to domains and the topic of each worksheet. Conclusions This study provides documentation of the current VA HIT system and efforts for consumers’ vision of an integrated system redesign. The HIT Systems Matrix provides a consumer preference blueprint to inform the current VA HIT system and the vision for future development to integrate electronic resources within VA and beyond with non-VA resources. The data presented in the HIT Systems Matrix are relevant for VA administrators and developers as well as other large health care organizations seeking to document and organize their consumer-facing HIT resources. PMID:27713112
Developing a Health Information Technology Systems Matrix: A Qualitative Participatory Approach.
Haun, Jolie N; Chavez, Margeaux; Nazi, Kim M; Antinori, Nicole
2016-10-06
The US Department of Veterans Affairs (VA) has developed various health information technology (HIT) resources to provide accessible veteran-centered health care. Currently, the VA is undergoing a major reorganization of VA HIT to develop a fully integrated system to meet consumer needs. Although extensive system documentation exists for various VA HIT systems, a more centralized and integrated perspective with clear documentation is needed in order to support effective analysis, strategy, planning, and use. Such a tool would enable a novel view of what is currently available and support identifying and effectively capturing the consumer's vision for the future. The objective of this study was to develop the VA HIT Systems Matrix, a novel tool designed to describe the existing VA HIT system and identify consumers' vision for the future of an integrated VA HIT system. This study utilized an expert panel and veteran informant focus groups with self-administered surveys. The study employed participatory research methods to define the current system and understand how stakeholders and veterans envision the future of VA HIT and interface design (eg, look, feel, and function). Directed content analysis was used to analyze focus group data. The HIT Systems Matrix was developed with input from 47 veterans, an informal caregiver, and an expert panel to provide a descriptive inventory of existing and emerging VA HIT in four worksheets: (1) access and function, (2) benefits and barriers, (3) system preferences, and (4) tasks. Within each worksheet is a two-axis inventory. The VA's existing and emerging HIT platforms (eg, My HealtheVet, Mobile Health, VetLink Kiosks, Telehealth), My HealtheVet features (eg, Blue Button, secure messaging, appointment reminders, prescription refill, vet library, spotlight, vitals tracker), and non-VA platforms (eg, phone/mobile phone, texting, non-VA mobile apps, non-VA mobile electronic devices, non-VA websites) are organized by row. Columns are titled with thematic and functional domains (eg, access, function, benefits, barriers, authentication, delegation, user tasks). Cells for each sheet include descriptions and details that reflect factors relevant to domains and the topic of each worksheet. This study provides documentation of the current VA HIT system and efforts for consumers' vision of an integrated system redesign. The HIT Systems Matrix provides a consumer preference blueprint to inform the current VA HIT system and the vision for future development to integrate electronic resources within VA and beyond with non-VA resources. The data presented in the HIT Systems Matrix are relevant for VA administrators and developers as well as other large health care organizations seeking to document and organize their consumer-facing HIT resources.
NASA Astrophysics Data System (ADS)
Durfee, David; Johnson, Walter; McLeod, Scott
2007-04-01
Uncooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapon sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components, and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable at temperature extremes from -40°C to +70°C. They must be extremely lightweight while withstanding the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. Producing a miniature electro-mechanical shutter with these capabilities that can fit into a rifle scope requires innovations in mechanical design, materials science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme-service infrared night vision systems.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan
2017-06-01
The Chang'e-3 was the first lunar soft-landing probe of China, composed of a lander and a lunar rover. Chang'e-3 successfully landed in the northwest of Mare Imbrium on December 14, 2013. After landing, the lunar rover carried out movement, imaging, and geological survey tasks. The rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism, and an inertial measurement unit (IMU). The Navcam system is composed of two fixed-focal-length cameras. The mast mechanism is a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEMs) of the surrounding region, and plan the moving paths of the rover. The stereo vision system must be calibrated before use. A control field can be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system change after the launch, orbital maneuvers, braking, and landing. Therefore, the stereo vision system should be self-calibrated on the Moon. An integrated self-calibration method based on bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. With the proposed method, the stereo vision system can be self-calibrated in the unknown lunar environment and all parameters can be estimated simultaneously. An experiment was conducted in a ground lunar-simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method, and the weighted least-squares method. The results show that the accuracy of the proposed method is superior to those of the other methods. Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the Moon.
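At the heart of a bundle block adjustment is a stacked reprojection-residual vector minimized over all rays simultaneously. The sketch below, using SciPy's least_squares, shows that shape of the problem; the parameter packing and the projection model are placeholders, since the paper's parameterization of the Navcam/mast system is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, points_3d, observations, project):
    """Stacked residual vector for a bundle block adjustment: every observed
    image point contributes (u_obs - u_proj, v_obs - v_proj). `params` packs
    whatever is being self-calibrated (camera intrinsics/extrinsics, mast
    joint offsets); `project` maps a 3-D point through those parameters to
    pixel coordinates. All names here are illustrative placeholders."""
    res = []
    for cam_idx, pt_idx, uv in observations:   # uv = observed pixel coords
        res.extend(np.asarray(uv) - project(params, cam_idx, points_3d[pt_idx]))
    return np.asarray(res)

# Minimizing the squared residuals over *all* rays at once is what makes the
# adjustment a block adjustment and lets every parameter be estimated
# simultaneously:
# solution = least_squares(reprojection_residuals, x0,
#                          args=(points_3d, observations, project))
```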
Dissolvable tattoo sensors: from science fiction to a viable technology
NASA Astrophysics Data System (ADS)
Cheng, Huanyu; Yi, Ning
2017-01-01
Early surrealist paintings and science fiction movies envisioned dissolvable tattoo electronic devices. In this paper, we review the recent advances that transform that vision into a viable technology, with capabilities extending even beyond the early vision. Specifically, we focus on the discussion of stretchable designs for tattoo sensors and degradable materials for dissolvable sensors, in the form of inorganic devices with a performance comparable to modern electronics. The integration of these two technologies, as well as future developments of bio-integrated devices, is also discussed. Many of the appealing ideas behind these devices are drawn from nature and especially from biological systems. Thus, bio-inspiration is believed to continue playing a key role in future devices for bio-integration and beyond.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more human-centered, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and the mechanical status of the preceding second. A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with passive roadside safety infrastructure, proper visual geometry design, timely visual guidance, and the visual information integrality of a curve are significant factors for drivers' perception-response time.
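For readers unfamiliar with multinomial log-linear prediction, the sketch below fits a multinomial logistic model that maps a few environment/vision features to a discretized response-time class. The features, classes, and numbers are invented stand-ins, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented stand-in data: columns are speed (km/h), visible curve length (m),
# and a driver-vision lane-visibility index; classes discretize the
# perception-response time (0 = fast, 1 = medium, 2 = slow).
X = np.array([[60, 250, 0.8], [80, 120, 0.4], [40, 400, 0.9],
              [70, 150, 0.5], [55, 300, 0.7], [90, 100, 0.3]])
y = np.array([0, 2, 0, 1, 1, 2])

# scikit-learn's lbfgs solver fits a multinomial (log-linear) model for
# multiclass targets by default.
model = LogisticRegression(max_iter=1000).fit(X, y)
probe = [[65, 200, 0.6]]
print(model.predict(probe), model.predict_proba(probe).round(3))
```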
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
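The incremental step itself is compact: map the end-effector position error through the Jacobian pseudo-inverse and saturate the resulting joint rates. A minimal sketch under the assumption of a position-only (3xN) Jacobian; the symbol names are illustrative.

```python
import numpy as np

def incremental_ik_step(q, jacobian, x_current, x_desired, dt, qdot_max):
    """One incremental inverse-kinematics update: drive the end-effector
    toward its instantaneous desired position via the Jacobian
    pseudo-inverse, then saturate the joint rates at their speed limits.
    Stepping from the current configuration keeps the solution on a single
    IK branch, which is how multiple solutions are avoided."""
    J = jacobian(q)                            # 3xN position Jacobian (assumed)
    qdot = np.linalg.pinv(J) @ ((x_desired - x_current) / dt)
    qdot = np.clip(qdot, -qdot_max, qdot_max)  # respect joint speed limits
    return q + qdot * dt
```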
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain appears able to emulate similar graph/network models, which implies an important paradigm shift in our knowledge about the brain: from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where the nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach gives the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.
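One concrete normative baseline of the kind such a framework can use is independent probability summation. The sketch below classifies a fused display's detection probability against that prediction; the paper draws on several integration models, so treat this single model and the tolerance as illustrative choices.

```python
def fusion_benchmark(p_a, p_b, p_fused, tol=0.01):
    """Classify fused-display performance against a simple normative model
    (independent probability summation); illustrative only."""
    best_single = max(p_a, p_b)
    predicted = 1 - (1 - p_a) * (1 - p_b)   # independence prediction
    if p_fused < best_single:
        return "interference: worse than the better single-sensor display"
    if p_fused < predicted - tol:
        return "sub-optimal: above single sensors, below the model prediction"
    if p_fused <= predicted + tol:
        return "optimal: matches the model prediction"
    return "super-optimal: suggests emergent features beyond the model"

print(fusion_benchmark(0.70, 0.60, 0.92))   # prediction 0.88 -> super-optimal
```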
NASA Astrophysics Data System (ADS)
Upadhyaya, A. S.; Bandyopadhyay, P. K.
2012-11-01
In state-of-the-art technology, integrated devices are widely used for their potential advantages. A common system reduces weight as well as the total space occupied by its various parts. In state-of-the-art surveillance systems, an integrated SWIR and night vision system is used for more accurate identification of objects. In this system a common optical window is used, which passes the radiation of both regions; the two spectral bands are then separated into two channels. ZnS is a good choice for a common window, as it transmits both regions of interest: night vision (650-850 nm) and SWIR (0.9-1.7 μm). In this work a broadband antireflection coating is developed on a ZnS window to enhance the transmission. This seven-layer coating is designed using the flip-flop design method. After obtaining the final design, some minor refinement is done using the simplex method. A SiO2/TiO2 coating material combination is used for this work. The coating is fabricated by a physical vapour deposition process, and the materials are evaporated by an electron beam gun. The average transmission of the substrate coated on both sides is 95% from 660 to 1700 nm. This coating also acts as a contrast enhancement filter for night vision devices, as it reflects the 590-660 nm region. Several trials have been conducted to check the coating repeatability, and it is observed that the transmission variation between trials is small and within the tolerance limit. The coating also passes environmental tests for stability.
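The physics behind evaluating any such design is the standard characteristic-matrix (transfer-matrix) calculation of stack reflectance. The sketch below implements it for normal incidence with dispersionless indices; the layer recipe shown is a single quarter-wave film for sanity checking, not the paper's seven-layer flip-flop design, which is not disclosed here.

```python
import numpy as np

def reflectance(layers, n_sub, wavelengths, n_inc=1.0):
    """Normal-incidence reflectance of a thin-film stack by the standard
    characteristic-matrix method. `layers` lists (index, thickness_nm) from
    the incident side; indices are taken as dispersionless for brevity."""
    R = []
    for lam in wavelengths:
        B, C = 1.0 + 0j, n_sub + 0j                  # start at the substrate
        for n, d in reversed(layers):                # apply layers outward
            delta = 2 * np.pi * n * d / lam          # phase thickness
            nB = np.cos(delta) * B + 1j * np.sin(delta) / n * C
            nC = 1j * n * np.sin(delta) * B + np.cos(delta) * C
            B, C = nB, nC
        Y = C / B                                    # stack admittance
        r = (n_inc - Y) / (n_inc + Y)
        R.append(abs(r) ** 2)
    return np.array(R)

# Sanity check: one quarter-wave film (n = 1.38, reference 1000 nm) on a
# ZnS-like substrate (n ~ 2.35) drops R from ~16% to ~1% near 1000 nm.
lams = np.linspace(660, 1700, 5)
print(reflectance([(1.38, 1000 / (4 * 1.38))], n_sub=2.35, wavelengths=lams))
```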
Wolff, J Gerard
2014-01-01
The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
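The run-length-encoding link between compression and edge detection is easy to make concrete: uniform image areas collapse to (value, count) pairs, and the boundaries between runs are candidate edges. A minimal sketch on one image row:

```python
def run_length_encode(row):
    """Run-length encoding of one image row: uniform areas compress to
    (value, count) pairs; where a run breaks is a candidate edge, which is
    the link to low-level feature extraction the article draws."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

row = [0, 0, 0, 1, 1, 0, 0, 0, 0]
runs = run_length_encode(row)
print(runs)                                   # [[0, 3], [1, 2], [0, 4]]
# cumulative run lengths mark the edge positions (pixels 3 and 5):
edges, pos = [], 0
for _, count in runs[:-1]:
    pos += count
    edges.append(pos)
print(edges)                                  # [3, 5]
```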
Automatic Inspection In Car Industry : User Point-Of-View
NASA Astrophysics Data System (ADS)
Salesse, Robert
1986-11-01
Many vision-based automatic inspection systems have been incorporated into the production lines of nearly all car manufacturers. RENAULT now has three years of experience with automated vision, and some rules have been established. Our most important contributions have been: - Examples of applications, some now operating, some awaiting integration into complete systems. - How to establish a good "request to quote"? - How to examine and compare suppliers' offers? What selection criteria should be used, and what important questions should be asked? - What can be expected from the new vision equipment, and what are the needs in hardware and software?
Humanoids for lunar and planetary surface operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing
2005-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones relate to a set of incremental challenges, for the solving of which new humanoid technologies are needed. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.
Oguntosin, Victoria W; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J; Kawamura, Sadao; Hayashi, Yoshikatsu
2017-01-01
We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion by using a vision-based control law. The EAsoftM can support the reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, the proposed vision-based control law enables precise control over the target reaching motion at the millimeter scale. Soft actuators aimed at rehabilitation exercise have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton that is anatomically aligned with the human body. The proposed integrated system would be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems that can be customized for individuals with specific motor impairments are required.
Oguntosin, Victoria W.; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J.; Kawamura, Sadao; Hayashi, Yoshikatsu
2017-01-01
We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate for gravity and with active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion by using a vision-based control law. The EAsoftM can support the reaching motion with compliance realized by the soft materials and pneumatic actuation. In addition, the proposed vision-based control law enables precise control over the target reaching motion at the millimeter scale. Soft actuators aimed at rehabilitation exercise have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM presents one possible solution to this challenge by transmitting torque effectively along an exoskeleton that is anatomically aligned with the human body. The proposed integrated system would be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems that can be customized for individuals with specific motor impairments are required. PMID:28736514
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
The impact of loupes and microscopes on vision in endodontics.
Perrin, P; Neuhaus, K W; Lussi, A
2014-05-01
To report on an intraradicular visual test in a simulated clinical setting under different optical conditions. Miniaturized visual tests with E-optotypes (bar distance from 0.01 to 0.05 mm) were fixed inside the root canal system of an extracted maxillary molar at different locations: at the orifice, a depth of 5 mm and the apex. The tooth was mounted in a phantom head for a simulated clinical setting. Unaided vision was compared with Galilean loupes (2.5× magnification) with integrated light source and an operating microscope (6× magnification). The influence of the dentists' age within two groups was evaluated: <40 years (n = 9) and ≥40 years (n = 15). Some younger dentists were able to identify the E-optotypes at the orifice, but otherwise, natural vision did not reveal any measurable result. With Galilean loupes, the younger dentists <40 years could see a 0.05 mm structure at the root canal orifice, in contrast to the older group ≥40 years. Only the microscope allowed the observation of structures inside the root canal, independent of age. Unaided vision and Galilean loupes with an integrated light source could not provide any measurable vision inside the root canal, but younger dentists <40 years could detect with Galilean loupes a canal orifice corresponding to the tip of the smallest endodontic instruments. Dentists over 40 years of age were dependent on the microscope to inspect the root canal system. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Integrated Communications, Navigation and Surveillance Technologies Keynote Address
NASA Technical Reports Server (NTRS)
Lebacqz, J. Victor
2004-01-01
Slides for the Keynote Address present graphics to enhance the discussion of NASA's vision, the National Space Exploration Initiative, current Mars exploration, and aeronautics exploration. The presentation also focuses on development of an Air Transportation System and transformation from present systems.
Handheld pose tracking using vision-inertial sensors with occlusion handling
NASA Astrophysics Data System (ADS)
Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried
2016-07-01
Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through a robust algorithm against illumination changes. Three data fusion methods have been proposed, including the triangulation-based stereo-vision system, constraint-based stereo-vision system with occlusion handling, and triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy assessment of the proposed system is carried out by comparing with the data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system has achieved high accuracy of few centimeters in position estimation and few degrees in orientation estimation.
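A standard building block for this kind of marker tracking is linear (DLT) triangulation of an LED from two calibrated views. The sketch below is that textbook step under ideal, noise-free assumptions; it is not the authors' full vision-inertial fusion pipeline, and the camera matrices in the example are invented.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from two calibrated views:
    each view contributes two rows of A, and the 3-D point is the
    right-singular vector of A with the smallest singular value.

    P1, P2: 3x4 camera projection matrices; uv1, uv2: pixel coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]                       # de-homogenize

# Invented example: two identical cameras with a 20 cm horizontal baseline.
K = np.diag([800, 800, 1.0]); K[0, 2] = K[1, 2] = 320
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X = np.array([0.1, -0.05, 2.0, 1.0])          # true marker position (m)
uv = lambda P: (P @ X)[:2] / (P @ X)[2]       # project to pixels
print(triangulate(P1, P2, uv(P1), uv(P2)))    # ~ [0.1, -0.05, 2.0]
```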
Gaylord Information Systems: Poised for Its Second Century.
ERIC Educational Resources Information Center
Farley, Charles E., Jr.; And Others
1993-01-01
Describes the development of the GALAXY Integrated Library System by Gaylord Information Systems. Topics addressed include the library automation business; industry trends, both long-term and short-term; a history of Gaylord's automation ventures; Gaylord's vision of the future; and perspectives from two GALAXY users. (LRW)
78 FR 26376 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
...; Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: May 30-31, 2013. Time: 8... of Committee: Integrative, Functional and Cognitive Neuroscience Integrated Review Group..., [email protected] . Name of Committee: Center for Scientific Review Special Emphasis Panel; Vision...
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion —e.g. to make robots appear more human-like— and the provision of architectures with intrinsic emotion —in the hope of enhancing behavioral aspects. This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional and autonomic aspects in social robot systems. This vision has evolved as a result of the efforts to consolidate the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach aims at obtaining universal theories of integrated —autonomic, emotional, cognitive— behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys — The Autonomous Systems Framework.
Review On Applications Of Neural Network To Computer Vision
NASA Astrophysics Data System (ADS)
Li, Wei; Nasrabadi, Nasser M.
1989-03-01
Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form; other systems feed raw data directly into the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating the low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of human vision models and neural network models are analyzed.
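Of the models the review lists, associative memory is the simplest to show end to end: store patterns in a Hebbian weight matrix and recall one from a corrupted probe by iterated thresholding. A minimal Hopfield-style sketch (the patterns and sizes are arbitrary):

```python
import numpy as np

# Store two bipolar patterns in a Hebbian weight matrix.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

# Recall from a probe with one corrupted bit.
state = np.array([1, -1, 1, -1, 1, -1, 1, 1])
for _ in range(5):                          # synchronous updates converge here
    state = np.where(W @ state >= 0, 1, -1)
print(state)                                # recovers the first stored pattern
```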
A Leadership Perspective on a Shared Vision for Healthcare.
Kitch, Tracy
2017-01-01
Our country's recent negotiations for a new Health Accord have shone a light on the importance of more accessible and better home care. The direction being taken on health funding investments has sent a strong message about healthcare system redesign. It is time to design a healthcare system that moves us away from a hospital-focused model to one that is more effective, integrated, and sustainable, and one that places a greater emphasis on primary care, community care, and home care. The authors of the lead paper (Sharkey and Lefebre 2017) provide their vision for people-powered care and explore the opportunity for nursing leaders to draw upon the unique expertise and insights of home care nursing as a strategic lever to bring about real health system transformation across all settings. Understanding what really matters at the beginning of the healthcare journey, and honouring the tenets of partnership and empowerment as a universal starting point to optimize health outcomes along the continuum of care, presents a very important opportunity. However, as nursing leaders in health system change, it is important that we extend the conversation beyond one setting. It is essential that, as leaders, we seek to design models of care delivery that achieve a shared vision, focused on seamless, coordinated care across the continuum that is person-centred. Bringing about real system change requires us to think differently and consider the role of nursing across all settings, collaboratively co-designing so that our collective skills and knowledge can work within a complementary framework. Focusing our leadership efforts on enhancing integration across healthcare settings will ensure that nurses can be important leaders and active decision-makers in health system change. A shared vision for healthcare requires all of us to look beyond the usual practices and structures, hospitals and institutional walls.
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang
2013-01-01
In the assembly of miniature devices, the position and orientation of the parts to be assembled must be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from a single direction using visual methods, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In this modular assembly system, two machine vision systems were employed to measure the three-dimensionally distributed assembly errors. High-resolution CCD cameras and high-repeatability precision stages were integrated to realize high-precision measurement in a large workspace. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational and translational stages. A set of templates was designed for calibrating the vision systems and evaluating the system's measurement accuracy.
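The abstract does not detail the template-based calibration; as a rough sketch of how such a per-camera calibration is commonly done today (a standard OpenCV chessboard calibration, not the authors' procedure; file names and board geometry are assumptions):

```python
import glob
import cv2
import numpy as np

# Calibration template: inner-corner grid and square size are assumptions.
pattern = (9, 6)
square_mm = 2.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_pts, img_pts = [], []
for path in glob.glob("cam1_*.png"):       # template images from one camera
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics + distortion for this camera; repeat per camera, then relate the
# two cameras through stage poses or a common template to fuse measurements.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)
```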
A Vision for the Next Ten Years for Integrated Ocean Observing Data
NASA Astrophysics Data System (ADS)
Willis, Z. S.
2012-12-01
Ocean observing has come a long way since the Ocean Sciences Decadal Committee met over a decade ago. Since then, our use of the ocean and coast and their vast resources has increased substantially, with increased shipping, fishing, offshore energy development, and recreational boating. That increased use has also spearheaded advances in observing systems. Cutting-edge autonomous and remotely operated vehicles scour the surface and travel to depths collecting essential biogeochemical data for better managing our marine resources. Satellites enable the global mapping of practically every physical ocean variable imaginable. A nationally integrated coastal network of high-frequency radars lines the borders of the U.S., feeding critical navigation, response, and environmental information continuously. Federal, academic, and industry communities have joined in unique partnerships at regional, national, and global levels to address common challenges to monitoring our ocean. The 2002 workshop, Building Consensus: Toward an Integrated and Sustained Ocean Observing System, laid the framework for the current United States Integrated Ocean Observing System (U.S. IOOS). Ten years later, U.S. IOOS has moved from concept to reality, though much work remains to meet the nation's ocean observing needs. Today, new research and technologies, evolving users and user requirements, economic and funding challenges, and diverse institutional mandates all influence the future growth and implementation of U.S. IOOS. In light of this new environment, the Interagency Ocean Observation Committee (IOOC) will host the 2012 Integrated Ocean Observing System Summit in November 2012, providing a forum to develop a comprehensive ocean observing vision for the next decade that draws on the knowledge and expertise gained by the IOOS-wide community over the past ten years. The Summit will bring together ocean observing stakeholders at the regional, national, and global levels to address the following challenges going forward: - Enhancing information delivery and integration to save lives, enhance the economy, and protect the environment - Disseminating seamless information across regional and national boundaries - Harnessing technological innovations for new frontiers and opportunities. The anticipated outcomes of the IOOS Summit include highlighting the past decade of progress toward an integrated system, revisiting and updating user requirements, assessing existing observing-system capabilities and gaps, identifying integration challenges and opportunities, and establishing a U.S. IOOS-community-wide vision for the next 10 years of ocean observing. Most important will be the execution of priorities identified before and during the Summit, carrying them forward into a new decade of an enhanced Integrated and Sustained Ocean Observing System.
Honeine, Jean-Louis; Crisafulli, Oscar; Sozzi, Stefania
2015-01-01
We investigated the integration time of haptic and visual input and their interaction during stance stabilization. Eleven subjects performed four tandem-stance conditions (60 trials each). Vision, touch, and both vision and touch were added and withdrawn. Furthermore, vision was replaced with touch and vice versa. Body sway, tibialis anterior, and peroneus longus activity were measured. Following addition or withdrawal of vision or touch, an integration time period elapsed before the earliest changes in sway were observed. Thereafter, sway varied exponentially to a new steady-state while reweighting occurred. Latencies of sway changes on sensory addition ranged from 0.6 to 1.5 s across subjects, consistently longer for touch than vision, and were regularly preceded by changes in muscle activity. Addition of vision and touch simultaneously shortened the latencies with respect to vision or touch separately, suggesting cooperation between sensory modalities. Latencies following withdrawal of vision or touch or both simultaneously were shorter than following addition. When vision was replaced with touch or vice versa, adding one modality did not interfere with the effect of withdrawal of the other, suggesting that integration of withdrawal and addition were performed in parallel. The time course of the reweighting process to reach the new steady-state was also shorter on withdrawal than addition. The effects of different sensory inputs on posture stabilization illustrate the operation of a time-consuming, possibly supraspinal process that integrates and fuses modalities for accurate balance control. This study also shows the facilitatory interaction of visual and haptic inputs in integration and reweighting of stance-stabilizing inputs. PMID:26334013
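The latency-then-exponential time course described above can be quantified by fitting a delayed-exponential model to a sway trace; a minimal sketch with SciPy on synthetic data (the model form and all constants are assumptions, not the authors' analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def delayed_exp(t, t0, a0, a1, tau):
    """Sway stays at a0 until latency t0, then relaxes exponentially to a1."""
    return np.where(t < t0, a0, a1 + (a0 - a1) * np.exp(-(t - t0) / tau))

# Synthetic sway amplitude: latency 0.9 s, time constant 2 s, plus noise.
t = np.linspace(0, 10, 500)
rng = np.random.default_rng(1)
y = delayed_exp(t, 0.9, 1.0, 0.4, 2.0) + rng.normal(0, 0.03, t.size)

p0 = [0.5, 1.0, 0.5, 1.0]                        # initial guesses
(t0, a0, a1, tau), _ = curve_fit(delayed_exp, t, y, p0=p0)
print(f"estimated latency {t0:.2f} s, time constant {tau:.2f} s")
```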
Library Systems: Current Developments and Future Directions.
ERIC Educational Resources Information Center
Healy, Leigh Watson
This report was commissioned in response to concerns expressed about the gap between institutional digital library initiatives and the products offered by library systems vendors. The study analyzes, from the perspective of libraries, the strategies, visions, and products that vendors of integrated library systems are offering as solutions. Case…
NASA Astrophysics Data System (ADS)
Holasek, R. E.; Nakanishi, K.; Swartz, B.; Zacaroli, R.; Hill, B.; Naungayan, J.; Herwitz, S.; Kavros, P.; English, D. C.
2013-12-01
As part of the NASA ROSES program, the NovaSol Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK) was flown as the payload on the unmanned Vision II helicopter. The goal of the May 2013 data collection was to obtain high-resolution visible and near-infrared (visNIR) hyperspectral data of seagrasses and coral reefs in the Florida Keys. The specifications of the SHARK hyperspectral system and the Vision II turbine rotorcraft will be described, along with the process of integrating the payload with the vehicle platform. The minimal size, weight, and power (SWaP) specifications of the SHARK system are an ideal match to the Vision II helicopter and its flight parameters. One advantage of the helicopter over fixed-wing platforms is its inherent ability to take off and land in a limited area and without a runway, enabling the UAV to be located in close proximity to the experiment areas and the science team. Decisions regarding integration times, waypoint selection, mission duration, and mission frequency can be based upon the local environmental conditions and can be modified just prior to takeoff. The operational procedures and coordination between the UAV pilot, payload operator, and scientist will be described. The SHARK system includes an inertial navigation system and a digital elevation model (DEM), which allow image coordinates to be calculated onboard the aircraft in real time. Examples of the geo-registered images from the data collection will be shown. [Figure captions: SHARK mounted below the VTUAV; SHARK deployed on the VTUAV over water.]
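Onboard geo-registration of the kind mentioned (INS pose plus a DEM) amounts to intersecting each pixel's view ray with the terrain; a simplified flat-terrain sketch (illustrative only, not NovaSol's implementation; the camera model and pose values are assumptions, and a real system would march the ray against the full DEM rather than a single plane):

```python
import numpy as np

def pixel_to_ground(u, v, K, R_cam_to_ned, cam_pos_ned, ground_z):
    """Intersect the view ray of pixel (u, v) with a horizontal plane z = ground_z.

    K: 3x3 camera intrinsics; R_cam_to_ned: camera-to-NED rotation from the INS;
    cam_pos_ned: camera position (north, east, down), down positive."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_ned = R_cam_to_ned @ ray_cam
    s = (ground_z - cam_pos_ned[2]) / ray_ned[2]         # scale to reach terrain
    return cam_pos_ned + s * ray_ned                     # ground point in NED

# Example: nadir-looking camera at 300 m altitude (down = -300 in NED).
K = np.array([[1000, 0, 640], [0, 1000, 512], [0, 0, 1.0]])
R = np.eye(3)                                            # camera z-axis pointing down
p = pixel_to_ground(640, 512, K, R, np.array([0, 0, -300.0]), 0.0)
print(p)   # principal-point ray hits the ground directly below the aircraft
```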
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
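For context on the metrics named above, the textbook forms of availability and mission reliability are easy to state (generic formulas, not LMI's tool; the failure and repair rates below are invented):

```python
import math

# Steady-state availability of a repairable unit: A = MTBF / (MTBF + MTTR).
mtbf_h = 5000.0   # mean time between failures, hours (assumed)
mttr_h = 2.0      # mean time to repair, hours (assumed)
availability = mtbf_h / (mtbf_h + mttr_h)

# Mission reliability of two independent units in series (exponential model):
# R(t) = exp(-(lambda1 + lambda2) * t).
t_h = 10.0                               # mission duration, hours
lam1, lam2 = 1 / 5000.0, 1 / 20000.0     # failure rates per hour (assumed)
reliability = math.exp(-(lam1 + lam2) * t_h)

print(f"A = {availability:.5f}, R(10 h) = {reliability:.5f}")
```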
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
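The core of the scale-ambiguity fix can be sketched in a few lines: if the laser distance meter returns metric ranges to points whose up-to-scale depths are known from monocular triangulation, a least-squares ratio fixes the global scale (a schematic sketch under idealized assumptions, not the authors' full pipeline):

```python
import numpy as np

def recover_scale(metric_ranges, vo_depths):
    """Least-squares scale s minimizing ||metric - s * vo_depth||.

    metric_ranges: laser distances (m) to points also triangulated by monocular VO;
    vo_depths: the same points' depths in the VO's arbitrary unit."""
    m = np.asarray(metric_ranges)
    d = np.asarray(vo_depths)
    return float(m @ d / (d @ d))

# Example: VO depths 1.0, 1.5, 2.1 (unitless); laser says 2.0, 3.1, 4.15 m.
s = recover_scale([2.0, 3.1, 4.15], [1.0, 1.5, 2.1])
trajectory_vo = np.array([[0, 0, 0], [0.4, 0, 0.1], [0.9, 0.1, 0.2]])
trajectory_m = s * trajectory_vo      # rescale the whole monocular trajectory
print(s, trajectory_m[-1])
```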
Science strategy for Core Science Systems in the U.S. Geological Survey, 2013-2023
Bristol, R. Sky; Euliss, Ned H.; Booth, Nathaniel L.; Burkardt, Nina; Diffendorfer, Jay E.; Gesch, Dean B.; McCallum, Brian E.; Miller, David M.; Morman, Suzette A.; Poore, Barbara S.; Signell, Richard P.; Viger, Roland J.
2012-01-01
Core Science Systems is a new mission of the U.S. Geological Survey (USGS) that grew out of the 2007 Science Strategy, “Facing Tomorrow’s Challenges: U.S. Geological Survey Science in the Decade 2007–2017.” This report describes the vision for this USGS mission and outlines a strategy for Core Science Systems to facilitate integrated characterization and understanding of the complex earth system. The vision and suggested actions are bold and far-reaching, describing a conceptual model and framework to enhance the ability of USGS to bring its core strengths to bear on pressing societal problems through data integration and scientific synthesis across the breadth of science. The context of this report is inspired by a direction set forth in the 2007 Science Strategy. Specifically, ecosystem-based approaches provide the underpinnings for essentially all science themes that define the USGS. Every point on earth falls within a specific ecosystem where data, other information assets, and the expertise of USGS and its many partners can be employed to quantitatively understand how that ecosystem functions and how it responds to natural and anthropogenic disturbances. Every benefit society obtains from the planet (food, water, raw materials to build infrastructure, homes and automobiles, fuel to heat homes and cities, and many others) is derived from or affects ecosystems. The vision for Core Science Systems builds on core strengths of the USGS in characterizing and understanding complex earth and biological systems through research, modeling, mapping, and the production of high-quality data on the nation’s natural resource infrastructure. Together, these research activities provide a foundation for ecosystem-based approaches through geologic mapping, topographic mapping, and biodiversity mapping. The vision describes a framework founded on these core mapping strengths that makes it easier for USGS scientists to discover critical information, share and publish results, and identify potential collaborations that transcend all USGS missions. The framework is designed to improve the efficiency of scientific work within USGS by establishing a means to preserve and recall data for future applications, organizing existing scientific knowledge and data to facilitate new use of older information, and establishing a future workflow that naturally integrates new data, applications, and other science products to make it easier and more efficient to conduct interdisciplinary research over time. Given the increasing need for integrated data and interdisciplinary approaches to solve modern problems, leadership by the Core Science Systems mission will facilitate problem solving by all USGS missions in ways not formerly possible. The report lays out a strategy to achieve this vision through three goals with accompanying objectives and actions. The first goal builds on and enhances the strengths of the Core Science Systems mission in characterizing and understanding the earth system from the geologic framework to the topographic characteristics of the land surface and biodiversity across the nation. The second goal enhances and develops new strengths in computer and information science to make it easier for USGS scientists to discover data and models, share and publish results, and discover connections between scientific information and knowledge.
The third goal brings additional focus to research and development methods to address complex issues affecting society that require integration of knowledge and new methods for synthesizing scientific information. Collectively, the report lays out a strategy to create a seamless connection between all USGS activities to accelerate and make USGS science more efficient by fully integrating disciplinary expertise within a new and evolving science paradigm for a changing world in the 21st century.
An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.
Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin
2015-08-01
This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult, and manual injections usually result in poor repeatability. To improve injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in noise rejection for vein detection, robustness in needle tracking, and integration of visual servoing with the mechatronics system.
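A bare-bones version of the control loop the abstract implies might look like the following (purely schematic; detect_vein, detect_needle, the pressure read-out, and both thresholds are hypothetical stand-ins, not the A-VAS interfaces):

```python
# Hypothetical stand-ins for the real vision and hardware interfaces.
def detect_vein(frame):   return (120.0, 80.0)   # (x, y) of vein centerline, px
def detect_needle(frame): return (118.5, 79.0)   # (x, y) of needle tip, px
def read_pressure_kpa():  return 3.2             # injection line pressure, kPa
def move_stage(dx, dy):   print(f"stage step ({dx:+.2f}, {dy:+.2f})")

PRESSURE_MAX_KPA = 20.0   # invented safety threshold
ALIGN_TOL_PX = 2.0        # invented alignment tolerance

def servo_step(frame):
    """One iteration of visual servoing with a pressure interlock."""
    vx, vy = detect_vein(frame)
    nx, ny = detect_needle(frame)
    ex, ey = vx - nx, vy - ny
    if read_pressure_kpa() > PRESSURE_MAX_KPA:
        return "abort"                     # pressure spike: stop immediately
    if abs(ex) < ALIGN_TOL_PX and abs(ey) < ALIGN_TOL_PX:
        return "aligned"                   # insertion may proceed
    move_stage(0.1 * ex, 0.1 * ey)         # proportional correction
    return "tracking"

print(servo_step(frame=None))
```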
NASA Technical Reports Server (NTRS)
Ponchak, Denise (Compiler)
2006-01-01
The Integrated Communications, Navigation and Surveillance (ICNS) Technologies Conference and Workshop provides a forum for government, industry, and academic communities performing research and technology development for advanced digital communications, navigation, and surveillance security systems and associated applications supporting the national and global air transportation systems. The event's goals are to understand current efforts and recent results in near- and far-term research and technology demonstration; identify integrated digital communications, navigation and surveillance research requirements necessary for a safe, high-capacity, advanced air transportation system; foster collaboration and coordination among all stakeholders; and discuss critical issues and develop recommendations to achieve the future integrated CNS vision for the national and global air transportation system.
NASA Technical Reports Server (NTRS)
Fujikawa, Gene (Compiler)
2004-01-01
The Integrated Communications, Navigational and Surveillance (ICNS) Technologies Conference and Workshop provides a forum for Government, industry, and academic communities performing research and technology development for advanced digital communications, navigation, and surveillance security systems and associated applications supporting the national and global air transportation systems. The event's goals are to understand current efforts and recent results in near- and far-term research and technology demonstration; identify integrated digital communications, navigation and surveillance research requirements necessary for a safe, high-capacity, advanced air transportation system; foster collaboration and coordination among all stakeholders; and discuss critical issues and develop recommendations to achieve the future integrated CNS vision for the national and global air transportation system.
NASA Astrophysics Data System (ADS)
Kutsch, W. L.
2015-12-01
Environmental research infrastructures and big data integration networks require common data policies, standardized workflows, and sophisticated e-infrastructure to optimise the data life cycle. This presentation summarizes the experience of developing the data life cycle for the Integrated Carbon Observation System (ICOS), a European Research Infrastructure, and outlines challenges that still exist and visions for future development. Like many other environmental research infrastructures, the ICOS RI is built on a large number of distributed observational or experimental sites. Data from these sites are transferred to Thematic Centres, where they are quality checked, processed, and integrated. Dissemination will be managed by the ICOS Carbon Portal. This complex data life cycle has been defined in detail by developing protocols and assigning responsibilities. Since data will be shared under an open access policy, there is a strong need for common data citation tracking systems that allow data providers to identify downstream usage of their data, so as to prove their importance and show their impact to stakeholders and the public. More challenges arise from interoperating with other infrastructures or providing data for global integration projects, as done, e.g., in the framework of GEOSS or in global integration approaches such as FLUXNET or SOCAT. Here, common metadata systems are the key solution for data detection and harvesting. The metadata characterises data, services, users, and ICT resources (including sensors and detectors). Risks may arise when data of high and low quality are mixed during this process, or when inexperienced data scientists without detailed knowledge of the data acquisition derive scientific theories through statistical analyses. The vision of fully open data availability is expressed in a recent GEO flagship initiative that will address important issues needed to build a connected and interoperable global network for carbon cycle and greenhouse gas observations. The initiative aims to meet the most urgent needs for integration between different information sources and methodologies, between different regional networks, and from data providers to users.
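As an illustration of the metadata-harvesting role described above, a minimal OAI-PMH harvest (a generic protocol example, not the ICOS Carbon Portal's actual interface; the endpoint URL is a placeholder):

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder endpoint; any OAI-PMH-compliant repository would work here.
BASE = "https://example.org/oai"
url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"

ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

# Pull title and identifier out of each Dublin Core record.
for rec in tree.iter("{http://www.openarchives.org/OAI/2.0/}record"):
    title = rec.find(".//dc:title", ns)
    ident = rec.find(".//dc:identifier", ns)
    print(title.text if title is not None else "(no title)",
          "->", ident.text if ident is not None else "?")
```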
Translating Vision into Design: A Method for Conceptual Design Development
NASA Technical Reports Server (NTRS)
Carpenter, Joyce E.
2003-01-01
One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.
Humanoids in Support of Lunar and Planetary Surface Operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier
2006-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the next decades and the development spirals in Project Constellation. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using a small-scale Fujitsu HOAP-2 humanoid is outlined.
Blood Glucose Meters and Accessibility to Blind and Visually Impaired People
Burton, Darren M.; Enigk, Matthew G.; Lilly, John W.
2012-01-01
In 2007, five blood glucose meters (BGMs) were introduced with the integrated speech output necessary for use by persons with vision loss. One of those five meters had fully integrated speech output, allowing a person with vision loss independence in accessing all features and functions of the meter. In comparison, 13 BGMs with integrated speech output were available in 2011. Accessibility attributes of these 13 meters were tabulated and product design features examined. All 13 meters were found to be usable by persons with vision loss to obtain a blood glucose measurement. However, only 4 of them featured the fully integrated speech output necessary for a person with vision loss to access all features and functions independently. PMID:22538131
Wysham, Nicholas G; Abernethy, Amy P; Cox, Christopher E
2014-10-01
Prediction models in critical illness are generally limited to short-term mortality and uncommonly include patient-centered outcomes. Current outcome prediction tools are also insensitive to individual context or evolution in healthcare practice, potentially limiting their value over time. Improved prognostication of patient-centered outcomes in critical illness could enhance decision-making quality in the ICU. Patient-reported outcomes have emerged as precise methodological measures of patient-centered variables and have been successfully employed using diverse platforms and technologies, enhancing the value of research in critical illness survivorship and in direct patient care. The learning health system is an emerging ideal characterized by integration of multiple data sources into a smart and interconnected health information technology infrastructure with the goal of rapidly optimizing patient care. We propose a vision of a smart, interconnected learning health system with integrated electronic patient-reported outcomes to optimize patient-centered care, including critical care outcome prediction. A learning health system infrastructure integrating electronic patient-reported outcomes may aid in the management of critical illness-associated conditions and yield tools to improve prognostication of patient-centered outcomes in critical illness.
Integrated Evaluation of Closed Loop Air Revitalization System Components
NASA Technical Reports Server (NTRS)
Murdock, K.
2010-01-01
NASA's vision and mission statements include an emphasis on human exploration of space, which requires environmental control and life support technologies. This Contractor Report (CR) describes the development and evaluation of an Air Revitalization System, modeling and simulation of the components, and integrated hardware testing, with the goal of better understanding the inherent capabilities and limitations of this closed-loop system. Major components integrated and tested included a 4-Bed Molecular Sieve, a Mechanical Compressor Engineering Development Unit, a Temperature Swing Adsorption Compressor, and a Sabatier Engineering and Development Unit. The requisite methodology and technical results are contained in this CR.
Adaptive Feedback in Local Coordinates for Real-time Vision-Based Motion Control Over Long Distances
NASA Astrophysics Data System (ADS)
Aref, M. M.; Astola, P.; Vihonen, J.; Tabus, I.; Ghabcheloo, R.; Mattila, J.
2018-03-01
We studied the differences in noise effects, the depth-correlated behavior of sensors, and the errors caused by mapping between coordinate systems in robotic applications of machine vision. In particular, the highly range-dependent noise densities encountered in semi-unknown object detection were considered. An equation is proposed to adapt estimation rules to dramatic changes of noise over longer distances. The algorithm also benefits from smooth wheel-odometry feedback to overcome the variable latencies of visual perception. An experimental evaluation of the integrated system is presented with and without the algorithm to highlight its effectiveness.
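One common way to realize such range-adaptive estimation is to let the measurement variance of a Kalman filter grow with sensed depth; a minimal one-dimensional sketch (the quadratic noise model R(d) = r0 + k*d^2 is an assumption for illustration, not the paper's equation):

```python
import numpy as np

def kf_update(x, P, z, R):
    """Scalar Kalman measurement update (state: target range)."""
    K = P / (P + R)              # Kalman gain
    return x + K * (z - x), (1 - K) * P

def R_of_depth(d, r0=0.01, k=0.002):
    """Assumed range-dependent measurement variance: grows with depth squared."""
    return r0 + k * d**2

rng = np.random.default_rng(2)
true_d = 12.0                    # meters
x, P = 10.0, 4.0                 # initial estimate and variance

for _ in range(20):
    z = true_d + rng.normal(0, np.sqrt(R_of_depth(true_d)))  # noisy vision fix
    x, P = kf_update(x, P, z, R_of_depth(x))   # weight fix by expected noise
print(f"estimate {x:.2f} m, variance {P:.4f}")
```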
Integrated Unmanned Air-Ground Robotics System, Volume 4
2001-08-20
[Extraction residue of the report's acronym list and requirements outline; only fragments are recoverable.] Acronyms: IPT, Integrated Product Team; IRP, Intermediate Power Rating; JAUGS, Joint Architecture for Unmanned Ground Systems; JCDL, TBD; Joint Vision 2020, TBD; JP-8, Jet Propulsion Fuel 8; km, kilometer; lbs., pounds; LOS, Line Of Sight; MAE, Mechanical and... [truncated]. Requirements excerpts: ...compatible with emerging JCDL and/or JAUGS; 2.3.2.2, payload must be "plug and play"; 2.3.3, communications; 2.3.3.1, system communications shall be robust...
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
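The comparison metrics listed (brightness, contrast, colorfulness, sharpness) have simple computational proxies; a sketch of one common set (standard image statistics, not necessarily the measures the authors used; the file name is a placeholder):

```python
import cv2
import numpy as np

def video_frame_metrics(bgr):
    """Proxy metrics for one frame: brightness, contrast, colorfulness, sharpness."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    brightness = gray.mean()
    contrast = gray.std()
    # Hasler & Suesstrunk colorfulness metric.
    b, g, r = [c.astype(np.float64) for c in cv2.split(bgr)]
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = np.sqrt(rg.std()**2 + yb.std()**2) \
        + 0.3 * np.sqrt(rg.mean()**2 + yb.mean()**2)
    # Sharpness as variance of the Laplacian (a standard focus measure).
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return brightness, contrast, colorfulness, sharpness

frame = cv2.imread("frame.png")          # placeholder file name
if frame is not None:
    print(video_frame_metrics(frame))
```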
NASA Astrophysics Data System (ADS)
Stetson, Suzanne; Weber, Hadley; Crosby, Frank J.; Tinsley, Kenneth; Kloess, Edmund; Nevis, Andrew J.; Holloway, John H., Jr.; Witherspoon, Ned H.
2004-09-01
The Airborne Littoral Reconnaissance Technologies (ALRT) project has developed and tested a nighttime operational minefield detection capability using commercial off-the-shelf high-power Laser Diode Arrays (LDAs). The Coastal System Station's ALRT project, under funding from the Office of Naval Research (ONR), has been designing, developing, integrating, and testing commercial arrays using a Cessna airborne platform over the last several years. This has led to the development of the Airborne Laser Diode Array Illuminator wide field-of-view (ALDAI-W) imaging test bed system. The ALRT project tested ALDAI-W at the Army's Night Vision Lab's Airborne Mine Detection Arid Test. By participating in Night Vision's test, ALRT was able to collect initial prototype nighttime operational data using ALDAI-W, showing impressive results and pioneering the way for final test bed demonstration conducted in September 2003. This paper describes the ALDAI-W Arid Test and results, along with processing steps used to generate imagery.
Square tracking sensor for autonomous helicopter hover stabilization
NASA Astrophysics Data System (ADS)
Oertel, Carl-Henrik
1995-06-01
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground-fixed or moving target. As a proof of concept for a general synthetic vision solution, a restricted machine vision system capable of locating and tracking a special target was developed by the Institute of Flight Mechanics of the Deutsche Forschungsanstalt für Luft- und Raumfahrt e.V. (German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated into the fly-by-wire helicopter ATTHeS (Advanced Technology Testing Helicopter System). An existing model-following controller for the forward flight condition was adapted to the hover and low-speed requirements of the flight vehicle. The special target, a black square with a side length of one meter, was mounted on top of a car. Flight tests demonstrated automatic stabilization of the helicopter above the moving car by synthetic vision.
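Detecting a high-contrast square of this kind is a classic contour problem; a present-day OpenCV sketch of the idea (illustrative only; the 1995 system predates this library, and the threshold constants are assumptions):

```python
import cv2
import numpy as np

def find_square(gray):
    """Return the 4 corner points of the largest dark, square-like contour, or None."""
    _, thresh = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_area = None, 0.0
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)   # polygonal simplification
        area = cv2.contourArea(approx)
        if len(approx) == 4 and cv2.isContourConvex(approx) and area > best_area:
            best, best_area = approx.reshape(4, 2), area
    return best

gray = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
if gray is not None:
    corners = find_square(gray)
    if corners is not None:
        print("square center:", corners.mean(axis=0))  # feed to hover controller
```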
Vision Problems and Reduced Reading Outcomes in Queensland Schoolchildren.
Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M
2017-03-01
To assess the relationship between vision and reading outcomes in Indigenous and non-Indigenous schoolchildren, and to determine whether vision problems are associated with lower reading outcomes in these populations, vision testing and reading assessments were performed on 508 Indigenous and non-Indigenous schoolchildren in Queensland, Australia, divided into two age groups: Grades 1 and 2 (6-7 years of age) and Grades 6 and 7 (12-13 years of age). Vision parameters measured included cycloplegic refraction, near point of convergence, heterophoria, fusional vergence range, rapid automatized naming, and visual motor integration. The following vision conditions were then classified based on the vision findings: uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration. Reading accuracy and reading comprehension were measured with the Neale reading test. The effects of uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration on reading accuracy and reading comprehension were investigated with ANCOVAs. The ANCOVAs explained a significant proportion of variance in both reading accuracy and reading comprehension scores in both age groups, with 40% of the variation in reading accuracy and 33% of the variation in reading comprehension explained in the younger age group, and 27% and 10% of the variation in reading accuracy and reading comprehension, respectively, in the older age group. The vision parameters of visual motor integration and rapid automatized naming were significant predictors in all ANCOVAs (P < .01), such that reduced visual motor integration and rapid automatized naming scores were associated with reduced reading results. Both reduced rapid automatized naming and reduced visual motor integration were associated with poorer reading outcomes in Indigenous and non-Indigenous children. This is an important finding given the recent emphasis placed on Indigenous children's reading skills and the fact that reduced rapid automatized naming and visual motor integration skills are more common in this group.
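An ANCOVA of the kind described reduces to an ordinary least-squares model with the vision measures as predictors and covariates; a schematic statsmodels sketch (hypothetical column names and data file, not the study's dataset):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: reading_accuracy, vmi (visual motor integration score),
# ran (rapid automatized naming score), hyperopia and ci as 0/1 indicators,
# plus age as a covariate.
df = pd.read_csv("vision_reading.csv")   # placeholder file

model = smf.ols(
    "reading_accuracy ~ vmi + ran + C(hyperopia) + C(ci) + age", data=df
).fit()
print(model.summary())                   # R-squared ~ 'variance explained'
print(model.params[["vmi", "ran"]])      # direction/size of the vision effects
```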
The Research Path to the Virtual Class. ZIFF Papiere 105.
ERIC Educational Resources Information Center
Rajasingham, Lalita
This paper describes a project conducted in 1991-92, based on research conducted in 1986-87 that demonstrated the need for a telecommunications system with the capacity of integrated services digital networks (ISDN) that would allow for sound, vision, and integrated computer services. Called the Tri-Centre Project, it set out to explore, from the…
New approach for teaching health promotion in the community: integration of three nursing courses.
Moshe-Eilon, Yael; Shemy, Galia
2003-07-01
The complexity of the health care system and its interdisciplinary nature require that each component of the system redefine its professional framework, relative advantage, and unique contribution as an independent discipline. In choosing the most efficient and cost-effective workforce, each profession in the health care system must clarify its importance and contribution; otherwise, functions will overlap and financial resources will be wasted. As rapid and wide-ranging changes occur in the health care system, the nursing profession must display a new and comprehensive vision that projects its values, beliefs, and relationships with and commitment to both patients and coworkers. The plans to fulfill this vision must be described clearly. This article presents part of a new professional paradigm developed by the nursing department of the University of Haifa, Israel. Three main topics are addressed: the building blocks of the new vision (i.e., community and health promotion, managerial skills, and academic research); the integration of these building blocks into the 4-year baccalaureate degree program (i.e., how to practice health promotion with students in the community setting; which managerial nursing skills to teach at the baccalaureate level, to what depth, and how to teach them; and the best way to teach basic research skills and implement them via a community project); and two senior student projects demonstrating the practical linking of the building blocks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenneth Thomas
2012-02-01
Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems, based on 1970's-vintage technology, to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. And so, the improvement in I&C system performance has not translated to bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner to enable significant business innovation as a means of off-setting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway, under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of nuclear power plants (NPPs) from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, a potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: (1) highly integrated control rooms; (2) highly automated plant; (3) integrated operations; (4) human performance improvement for field workers; and (5) outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision.
Plutonium immobilization can loading FY99 component test report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.
2000-06-01
This report summarizes FY99 can loading work completed for the Plutonium Immobilization Project, including details about the helium hood, cold pour cans, the can loading robot, the vision system, the magnetically coupled ray cart and lifts, system integration, the can loading glovebox layout, and an FY99 cost table.
NASA Technical Reports Server (NTRS)
Kearney, Lara
2004-01-01
In January 2004, the President announced a new Vision for Space Exploration. NASA's Office of Exploration Systems has identified Extravehicular Activity (EVA) as a critical capability for supporting the Vision for Space Exploration. EVA is required for all phases of the Vision, both in-space and planetary. Supporting the human outside the protective environment of the vehicle or habitat, and allowing him/her to perform efficient and effective work, requires an integrated EVA "System of systems." The EVA System includes EVA suits, airlocks, tools and mobility aids, and human rovers. At the core of the EVA System is the highly technical EVA suit, which consists mainly of a life support system and a pressure/environmental protection garment. The EVA suit, in essence, is a miniature spacecraft that combines many different subsystems, such as life support, power, communications, avionics, robotics, pressure systems, and thermal systems, into a single autonomous unit. Development of a new EVA suit requires technology advancements similar to those required in the development of a new space vehicle. A majority of the technologies necessary to develop advanced EVA systems are currently at a low Technology Readiness Level of 1-3. This is particularly true for the long-pole technologies of the life support system.
An Rx for 20/20 Vision: Vision Planning and Education.
ERIC Educational Resources Information Center
Chrisman, Gerald J.; Holliday, Clifford R.
1996-01-01
Discusses the Dallas Independent School District's decision to adopt an integrated technology infrastructure and the importance of vision planning for long term goals. Outlines the vision planning process: first draft; environmental projection; restatement of vision in terms of market projections, anticipated customer needs, suspected competitor…
Navarro, Pedro J.; Fernández, Carlos; Weiss, Julia; Egea-Cortines, Marcos
2012-01-01
Plant development is the result of an endogenous morphogenetic program that integrates environmental signals. The so-called circadian clock is a set of genes that integrates environmental inputs into an internal pacing system that gates growth and other outputs. The study of circadian growth responses requires high sampling rates to detect changes in growth and avoid aliasing. We have developed a flexible, configurable growth chamber comprising a computer vision system that allows sampling rates ranging from one image per 30 s to one image per several hours or days. The vision system has a controlled illumination system, which allows the user to set up different configurations, and emits a combination of wavelengths ensuring the optimal growth of the species under analysis. In order to obtain high contrast in the captured images, the capture system is composed of two CCD cameras, one for the day period and one for the night period. Depending on the sample type, flexible image-processing software calculates different parameters based on geometric calculations. As a proof of concept we tested the system on three different plant tissues: growth of petunia and snapdragon (Antirrhinum majus) flowers and of cladodes from the cactus Opuntia ficus-indica. We found that petunia flowers grow at a steady pace and display a strong growth increase in the early morning, whereas Opuntia cladode growth turned out not to follow a circadian growth pattern under the growth conditions imposed. Furthermore, we were able to identify a decoupling of the increases in area and length, indicating that two independent growth processes are responsible for the final size and shape of the cladode. PMID:23202214
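Geometric measurements of the sort described (projected area, length) can be extracted from a segmented frame in a few lines; an illustrative OpenCV sketch (the thresholding scheme and file name are assumptions, not the chamber's actual pipeline):

```python
import cv2

def organ_area_and_length(gray):
    """Projected area (px^2) and major-axis length (px) of the largest blob."""
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    (_, _), (w, h), _ = cv2.minAreaRect(c)     # oriented bounding box
    return area, max(w, h)

gray = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
if gray is not None:
    print(organ_area_and_length(gray))
# Sampling one frame per 30 s and logging (area, length) over days yields the
# time series from which circadian growth patterns can be assessed.
```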
Visions of the Future: Hybrid Electric Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Bowman, Cheryl L.
2016-01-01
The National Aeronautics and Space Administration (NASA) is investing continually in improving civil aviation. Hybridization of aircraft propulsion is one aspect of a technology suite which will transform future aircraft. In this context, hybrid propulsion is considered a combination of traditional gas turbine propulsion and electric drive enabled propulsion. This technology suite includes elements of propulsion and airframe integration, parallel hybrid shaft power, turbo-electric generation, electric drive systems, component development, materials development and system integration at multiple levels.
The role of differential delays in integrating transient visual and proprioceptive information
Cameron, Brendan D.; de la Malla, Cristina; López-Moliner, Joan
2014-01-01
Many actions involve limb movements toward a target. Visual and proprioceptive estimates of hand position are available online, and by optimally combining both modalities during the movement (Ernst and Banks, 2002), the system can increase the precision of the hand estimate. The notion that both sensory modalities are integrated is also motivated by the intuition that we do not consciously perceive any discrepancy between the felt and seen positions of the hand. This coherence as a result of integration does not necessarily imply realignment between the two modalities (Smeets et al., 2006). For example, the two estimates (visual and proprioceptive) might be different without either of them (e.g., proprioception) ever being adjusted after recovering the other (e.g., vision). The implication that the felt and seen positions might be different has a temporal analog. Because the actual feedback from the hand at a given instantaneous position reaches brain areas at different times for proprioception and vision (sooner for proprioception), the corresponding instantaneous unisensory position estimates will differ, with the proprioceptive estimate being ahead of the visual one. Based on the assumption that the system optimally integrates the available evidence from both senses online, we introduce a temporal mechanism that explains the reported overestimation of hand positions when vision is occluded, for both active and passive movements (Gritsenko et al., 2007), without the need to resort to initial feedforward estimates (Wolpert et al., 1995). We set up hypotheses to test the validity of the model, and we contrast simulation-based predictions with empirical data. PMID:24550870
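The optimal-combination rule invoked above is the standard reliability-weighted average; the sketch below adds the latency offset the paper emphasizes (all numbers are invented for illustration):

```python
def fuse(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (Ernst & Banks style) fusion of two position estimates."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    x = w_vis * x_vis + (1 - w_vis) * x_prop
    var = 1 / (1 / var_vis + 1 / var_prop)
    return x, var

# A hand moving at 0.3 m/s; vision lags proprioception by ~50 ms (assumed),
# so the two instantaneous estimates refer to slightly different true positions.
speed, lag = 0.3, 0.05
x_prop = 0.200                        # proprioceptive estimate (m)
x_vis = x_prop - speed * lag          # visual estimate lags behind the hand
x_hat, var_hat = fuse(x_vis, var_vis=4e-4, x_prop=x_prop, var_prop=9e-4)
print(f"fused position {x_hat * 1000:.1f} mm, variance {var_hat:.1e}")
# The fused estimate lies between the two unisensory estimates; with vision
# lagging, it sits behind the proprioceptive one, which is the kind of offset
# the temporal mechanism above is meant to capture.
```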
ERIC Educational Resources Information Center
Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.
2015-01-01
Introduction: The BlindAid, a virtual system developed for orientation and mobility (O&M) training of people who are blind or have low vision, allows interaction with different virtual components (structures and objects) via auditory and haptic feedback. This research examined if and how the BlindAid that was integrated within an O&M…
NASA Technical Reports Server (NTRS)
Young, Steve; UijtdeHaag, Maarten; Sayre, Jonathon
2003-01-01
Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data representing terrain, obstacles, and cultural features. As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. Further, updates to the databases may not be provided as changes occur. These issues limit the certification level and constrain the operational context of SVS for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound the integrity of Digital Elevation Models (DEMs) by using radar altimeter measurements during flight. This paper describes an extension of this concept to include X-band Weather Radar (WxR) measurements. This enables the monitor to detect additional classes of DEM errors and to reduce the exposure time associated with integrity threats. Feature extraction techniques are used, along with a statistical assessment of similarity measures between the sensed and stored features that are detected. Recent flight testing in the area around the Juneau, Alaska Airport (JNU) has resulted in a comprehensive set of sensor data that is being used to assess the feasibility of the proposed monitor technology. Initial results of this assessment are presented.
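A simple form of the similarity assessment described compares the terrain profile sensed along track with the profile predicted from the stored DEM; an illustrative sketch (the disparity statistic and alert threshold are assumptions, not the monitor's actual test):

```python
import numpy as np

def dem_integrity_metric(sensed_profile, dem_profile):
    """Mean and std of the disparity between sensed and stored terrain heights."""
    disparity = np.asarray(sensed_profile) - np.asarray(dem_profile)
    return disparity.mean(), disparity.std()

# Synthetic along-track profiles (m): DEM plus a bias-like error region.
dem = 100 + 5 * np.sin(np.linspace(0, 6, 200))
sensed = dem + np.random.default_rng(3).normal(0, 1.5, 200)
sensed[120:150] += 20          # injected DEM error (e.g., unmodeled terrain)

bias, spread = dem_integrity_metric(sensed, dem)
THRESHOLD_M = 3.0              # invented alert threshold
print("integrity alert" if abs(bias) + spread > THRESHOLD_M else "nominal")
```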
Combining path integration and remembered landmarks when navigating without vision.
Kalia, Amy A; Schrater, Paul R; Legge, Gordon E
2013-01-01
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742
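A toy Python sketch of the gated combination the study reports; the congruency gate, threshold, and noise parameters are illustrative assumptions, not fitted values:

```python
import numpy as np

def gated_estimate(x_landmark, x_path, sigma_l, sigma_p, gate_sigmas=2.0):
    """Gated cue combination: average the remembered-landmark and
    path-integration location cues only when they are congruent; otherwise
    fall back on path integration alone. Gate width and weighting rule are
    illustrative assumptions."""
    conflict = abs(x_landmark - x_path)
    if conflict <= gate_sigmas * np.hypot(sigma_l, sigma_p):  # congruent?
        w = sigma_p**2 / (sigma_l**2 + sigma_p**2)  # reliability weighting
        return w * x_landmark + (1 - w) * x_path
    return x_path                       # gate closed: ignore the landmark

print(gated_estimate(5.0, 5.3, sigma_l=0.4, sigma_p=0.6))  # congruent: averaged
print(gated_estimate(5.0, 8.0, sigma_l=0.4, sigma_p=0.6))  # conflict: path only
```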
Computer hardware and software for robotic control
NASA Technical Reports Server (NTRS)
Davis, Virgil Leon
1987-01-01
The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor-based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed-loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall system.
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (non-neoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole-slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
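For readers who want to experiment with this kind of texture measure, here is a hedged Python sketch computing Haralick feature 4 (sum of squares: variance) over 100 × 100 subregions; the grey-level quantization and co-occurrence settings are assumptions, not the study's exact configuration:

```python
import numpy as np
from skimage.feature import graycomatrix  # 'greycomatrix' in older scikit-image

def haralick_variance(patch, levels=64):
    """Haralick 'sum of squares: variance' (feature 4) from a grey-level
    co-occurrence matrix, the kind of texture measure the study found
    discriminated stroma from carcinoma."""
    q = (patch / max(patch.max(), 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)[:, :, 0, 0]
    i = np.arange(levels)
    mu = np.sum(i * glcm.sum(axis=1))     # mean grey level under the GLCM
    return np.sum(((i - mu) ** 2)[:, None] * glcm)

# Score every 100 x 100 pixel subregion of a (stand-in) scene, as in the paper
scene = np.random.randint(0, 256, (400, 400)).astype(float)
scores = {(r, c): haralick_variance(scene[r:r+100, c:c+100])
          for r in range(0, 400, 100) for c in range(0, 400, 100)}
print(f"feature 4 for subregion (0, 0): {scores[(0, 0)]:.2f}")
```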
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental testbed to investigate the integration and control of perception in a continuously operating vision system is described. The testbed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously, (2) integrate software contributions from geographically dispersed laboratories, (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects, (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance, and (5) be dynamically reconfigurable.
A vision for an ultra-high resolution integrated water cycle observation and prediction system
NASA Astrophysics Data System (ADS)
Houser, P. R.
2013-05-01
Society's welfare, progress, and sustainable economic growth, and life itself, depend on the abundance and vigorous cycling and replenishing of water throughout the global environment. The water cycle operates on a continuum of time and space scales and exchanges large amounts of energy as water undergoes phase changes and is moved from one part of the Earth system to another. We must move toward an integrated observation and prediction paradigm that addresses broad local-to-global science and application issues by realizing synergies associated with multiple, coordinated observation and prediction systems. A central challenge of a future water and energy cycle observation strategy is to progress from single-variable water-cycle instruments to multivariable integrated instruments in electromagnetic-band families. The microwave range of the electromagnetic spectrum is ideally suited for sensing the state and abundance of water because of water's dielectric properties. Eventually, a dedicated high-resolution water-cycle microwave-based satellite mission may be possible based on large-aperture antenna technology that can harvest the synergy afforded by simultaneous multichannel active and passive microwave measurements. A partial demonstration of these ideas can even be realized with existing microwave satellite observations to support advanced multivariate retrieval methods that exploit the totality of the microwave spectral information. Simultaneous multichannel active and passive microwave retrieval would allow improved-accuracy retrievals that are not possible with isolated measurements. Furthermore, the simultaneous monitoring of several of the land, atmospheric, oceanic, and cryospheric states brings synergies that will substantially enhance understanding of the global water and energy cycle as a system. The multichannel approach also affords advantages to some constituent retrievals; for instance, simultaneous retrieval of vegetation biomass would improve soil-moisture retrieval by avoiding the need for auxiliary vegetation information. This multivariable water-cycle observation system must be integrated with high-resolution, application-relevant prediction systems to optimize their information content and utility in addressing critical water-cycle issues. One such vision is a real-time, ultra-high-resolution, locally mosaicked global land modeling and assimilation system that overlays regional high-fidelity information on a baseline global land prediction system. Such a system would provide the best possible local information for use in applications, while integrating and sharing information globally for diagnosing larger water-cycle variability. In a sense, this would constitute a hydrologic telecommunication system, in which data from the best local in-situ gages, Doppler radars, and weather stations can be shared internationally and integrated in a consistent manner with global observation platforms like the multivariable water-cycle mission. To realize such a vision, large issues must be addressed, such as international data-sharing policy, model-observation integration approaches that maintain local extremes while achieving global consistency, and methods for establishing error estimates and uncertainty.
Gesture therapy: a vision-based system for upper extremity stroke rehabilitation.
Sucar, L; Luis, Roger; Leder, Ron; Hernandez, Jorge; Sanchez, Israel
2010-01-01
Stroke is the world's leading cause of motor and cognitive disabilities requiring therapy. It is therefore important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. We have developed a low-cost vision-based system that allows stroke survivors to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a virtual environment for facilitating repetitive movement training with computer vision algorithms that track the hand of a patient using an inexpensive camera and a personal computer. This system, called Gesture Therapy, includes a gripper with a pressure sensor to incorporate hand and finger rehabilitation, and it tracks the head of the patient to detect and avoid trunk compensation. It has been evaluated in a controlled clinical trial at the National Institute for Neurology and Neurosurgery in Mexico City, comparing it with conventional occupational therapy. In this paper we describe the latest version of the Gesture Therapy system and summarize the results of the clinical trial.
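As a rough illustration of camera-based hand tracking of the kind described, the sketch below uses OpenCV's colour-histogram back-projection with CamShift; the camera index and initial hand box are placeholder assumptions, and this is not the Gesture Therapy implementation:

```python
import cv2

# Colour-histogram hand tracking with CamShift: a generic OpenCV sketch,
# NOT the Gesture Therapy code. Camera index and initial hand box assumed.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
assert ok, "no camera frame available"
x, y, w, h = 200, 150, 80, 80                   # assumed box over the hand
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift follows the hue mode of the hand from frame to frame
    box, (x, y, w, h) = cv2.CamShift(back, (x, y, w, h), term)
    cv2.ellipse(frame, box, (0, 255, 0), 2)
    cv2.imshow("hand tracking sketch", frame)
    if cv2.waitKey(30) == 27:                   # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```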
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Shelton, Kevin J.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.; Norman, Rober M.; Ellis, Kyle K. E.; Barmore, Bryan E.
2011-01-01
An emerging Next Generation Air Transportation System concept - Equivalent Visual Operations (EVO) - can be achieved using an electronic means to provide sufficient visibility of the external world and other required flight references on flight deck displays that enable the safety, operational tempos, and visual flight rules (VFR)-like procedures for all weather conditions. Synthetic and enhanced flight vision system technologies are critical enabling technologies to EVO. Current research evaluated concepts for flight deck-based interval management (FIM) operations, integrated with Synthetic Vision and Enhanced Vision flight-deck displays and technologies. One concept involves delegated flight deck-based separation, in which the flight crews were paired with another aircraft and responsible for spacing and maintaining separation from the paired aircraft, termed, "equivalent visual separation." The operation required the flight crews to acquire and maintain an "equivalent visual contact" as well as to conduct manual landings in low-visibility conditions. The paper describes results that evaluated the concept of EVO delegated separation, including an off-nominal scenario in which the lead aircraft was not able to conform to the assigned spacing resulting in a loss of separation.
Evaluation Action Plan for the Texas Workforce Development System. Revised.
ERIC Educational Resources Information Center
King, Christopher T.; McPherson, Robert E.
Texas is shifting to an integrated, systems-oriented approach to providing work force services for its residents and employers in which all services are guided by a single mission and vision. Implementation strategies are clearly focused on achieving common results. Accountability means being able to ensure taxpayers, residents, employers, and…
Computer vision for driver assistance systems
NASA Astrophysics Data System (ADS)
Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner
1998-07-01
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system that extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach lies in the integrative coupling of different algorithms providing partly redundant information.
Computer vision-based classification of hand grip variations in neurorehabilitation.
Zariffa, José; Steeves, John D
2011-01-01
The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE
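One plausible way to build such a grip classifier (not necessarily the authors' pipeline) is HOG descriptors fed to a multi-class SVM; the sketch below uses stand-in data and assumed parameters:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Hypothetical posture classifier in the spirit of the paper: HOG features
# from hand images, a multi-class SVM over the three grips. The study's
# actual features and classifier are not specified here.
GRIPS = ["cylindrical", "lateral_key", "tip_to_tip"]

def descriptor(gray_64x64):
    return hog(gray_64x64, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Stand-in training data: 30 random 64x64 "frames" with known grip labels
X = np.stack([descriptor(np.random.rand(64, 64)) for _ in range(30)])
y = np.repeat([0, 1, 2], 10)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(GRIPS[clf.predict(X[:1])[0]])
```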
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring planetary surfaces will in the future require more autonomy than they have today. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision, through a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with its sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts such as image acquisition (using a novel zoomed 3D time-of-flight & RGB camera), mapping from 3D-TOF data, panoramic image and stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potentially scientifically interesting targets.
A multiscale Markov random field model in wavelet domain for image segmentation
NASA Astrophysics Data System (ADS)
Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan
2017-07-01
The human vision system has abilities for feature detection, learning, and selective attention, with properties of hierarchy and bidirectional connection realized in neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image-processing functions of the vision system. For an input scene, our model provides sparse representations using wavelet transforms and extracts the scene's topological organization using the MRF. In addition, the hierarchy property of the vision system is simulated using a pyramid framework in our model. There are two information flows in our model, i.e., a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, our model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.
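The multiscale front end of such a model can be sketched with a standard 2-D wavelet decomposition; the Python fragment below shows the pyramid structure only, with the MRF prior omitted and the wavelet family and depth chosen arbitrarily:

```python
import numpy as np
import pywt

# Multiscale front end only: a 2-D wavelet pyramid like the one on which the
# paper's MRF operates. Wavelet family and depth are arbitrary assumptions;
# the MRF prior and the top-down feedback pass are not implemented here.
image = np.random.rand(256, 256)                 # stand-in input scene
coeffs = pywt.wavedec2(image, wavelet="db2", level=3)

approx = coeffs[0]                               # coarsest approximation
print("top of pyramid:", approx.shape)
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    # Bottom-up features: detail energy per scale. A top-down procedure
    # could bias finer-scale labels using coarser-scale decisions.
    energy = sum(float(np.sum(c ** 2)) for c in (cH, cV, cD))
    print(f"level {lvl}: detail maps {cH.shape}, energy {energy:.2f}")
```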
Risk assessment of integrated electronic health records.
Bjornsson, Bjarni Thor; Sigurdardottir, Gudlaug; Stefansson, Stefan Orri
2010-01-01
The paper describes the security concerns related to Electronic Health Records (EHR) both in registration of data and integration of systems. A description of the current state of EHR systems in Iceland is provided, along with the Ministry of Health's future vision and plans. New legislation provides the opportunity for increased integration of EHRs and further collaboration between institutions. Integration of systems, along with greater availability and access to EHR data, requires increased security awareness since additional risks are introduced. The paper describes the core principles of information security as it applies to EHR systems and data. The concepts of confidentiality, integrity, availability, accountability and traceability are introduced and described. The paper discusses the legal requirements and importance of performing risk assessment for EHR data. Risk assessment methodology according to the ISO/IEC 27001 information security standard is described with examples on how it is applied to EHR systems.
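A minimal sketch of the qualitative risk scoring used in ISO/IEC 27001-style assessments (risk as likelihood times impact); the assets, ratings, and thresholds below are purely illustrative, not from the paper:

```python
# Qualitative risk scoring sketch: risk = likelihood x impact, with level
# thresholds an organisation would set itself. All entries are illustrative.
ASSETS = [
    # (asset, threat, likelihood 1-5, impact 1-5)
    ("EHR database",  "unauthorised access after integration", 3, 5),
    ("audit log",     "loss of traceability",                  2, 4),
    ("HL7 interface", "data corrupted in transit",             2, 5),
]

for asset, threat, likelihood, impact in ASSETS:
    risk = likelihood * impact
    level = "high" if risk >= 12 else "medium" if risk >= 6 else "low"
    print(f"{asset:14s} | {threat:40s} | risk {risk:2d} ({level})")
```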
Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin
2015-04-22
Coordinate identification between vision systems and robots is a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map, and planning locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools, and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared with their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.
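The ground-plane estimation step can be illustrated with a generic least-squares plane fit; the sketch below (not the paper's exact estimator) recovers the plane normal from a noisy synthetic point cloud via SVD:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane through a 3-D point cloud via SVD: the plane
    normal is the right singular vector of the centred points with the
    smallest singular value. A generic estimator, not the paper's method."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # least-variance direction
    d = -normal @ centroid             # plane equation: n . x + d = 0
    return normal, d

# Noisy synthetic "ground" as seen by an on-board 3-D vision system
xy = np.random.uniform(-1, 1, (500, 2))
z = 0.05 * xy[:, 0] - 0.02 * xy[:, 1] + np.random.normal(0, 0.005, 500)
n, d = fit_ground_plane(np.column_stack([xy, z]))
print("estimated plane normal:", n / np.linalg.norm(n))
```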
Labhart, T; Petzold, J; Helbling, H
2001-07-01
Many insects exploit the polarization pattern of the sky for compass orientation in navigation or cruising-course control. Polarization-sensitive neurones (POL1-neurones) in the polarization vision pathway of the cricket visual system have wide visual fields of approximately 60 degrees diameter, i.e. these neurones integrate information over a large area of the sky. This results from two different mechanisms. (i) Optical integration; polarization vision is mediated by a group of specialized ommatidia at the dorsal rim of the eye. These ommatidia lack screening pigment, contain a wide rhabdom and have poor lens optics. As a result, the angular sensitivity of the polarization-sensitive photoreceptors is very wide (median approximately 20 degrees ). (ii) Neural integration; each POL1-neurone receives input from a large number of dorsal rim photoreceptors with diverging optical axes. Spatial integration in POL1-neurones acts as a spatial low-pass filter. It improves the quality of the celestial polarization signal by filtering out cloud-induced local disturbances in the polarization pattern and increases sensitivity.
What aspects of vision facilitate haptic processing?
Millar, Susanna; Al-Attar, Zainab
2005-12-01
We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and total blindness. Results for target locations differed, suggesting additional effects from adjacent touch cues. These are discussed. Touch with full vision was most accurate, as expected. Peripheral and tunnel vision, which reduce visuo-spatial cues, differed in error pattern. Both were less accurate than full vision, and significantly more accurate than touch with diffuse light perception, and touch alone. The important finding was that touch with diffuse light perception, which excludes spatial cues, did not differ from touch without vision in performance accuracy, nor in location error pattern. The contrast between spatially relevant versus spatially irrelevant vision provides new, rather decisive, evidence against the hypothesis that vision affects haptic processing even if it does not add task-relevant information. The results support optimal integration theories, and suggest that spatial and non-spatial aspects of vision need explicit distinction in bimodal studies and theories of spatial integration.
Computer vision challenges and technologies for agile manufacturing
NASA Astrophysics Data System (ADS)
Molley, Perry A.
1996-02-01
Sandia National Laboratories, a Department of Energy laboratory, is responsible for maintaining the safety, security, reliability, and availability of the nuclear weapons stockpile for the United States. Because of the changing national and global political climates and inevitable budget cuts, Sandia is changing the methods and processes it has traditionally used in the product realization cycle for weapon components. Because of the increasing age of the nuclear stockpile, it is certain that the reliability of these weapons will degrade with time unless action is eventually taken to repair, requalify, or renew them. Furthermore, due to the downsizing of the DOE weapons production sites and the loss of technical personnel, the new product realization process is being focused on developing and deploying advanced automation technologies in order to maintain the capability for producing new components. The goal of Sandia's technology development program is to create a product realization environment that is cost-effective, with improved quality and reduced cycle time for small lot sizes. The new environment will rely less on the expertise of humans and more on intelligent systems and automation to perform the production processes. The systems will be robust in order to provide maximum flexibility and responsiveness for rapidly changing component or product mixes. An integrated enterprise will allow ready access to and use of information for effective and efficient product and process design. Concurrent engineering methods will allow a speedup of the product realization cycle, reduce costs, and dramatically lessen the dependency on creating and testing physical prototypes. Virtual manufacturing will allow production processes to be designed, integrated, and programmed off-line before a piece of hardware ever moves. The overriding goal is to be able to build a large variety of new weapons parts on short notice. Many of the technologies being developed are also applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment, for automation of processes such as inspection, assembly, welding, material dispensing, and other process control tasks. Although many academic and commercial solutions have been developed, none has had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword: the benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with the features used for recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost-effective.
Colour, vision and coevolution in avian brood parasitism.
Stoddard, Mary Caswell; Hauber, Mark E
2017-07-05
The coevolutionary interactions between avian brood parasites and their hosts provide a powerful system for investigating the diversity of animal coloration. Specifically, reciprocal selection pressure applied by hosts and brood parasites can give rise to novel forms and functions of animal coloration, which largely differ from those that arise when selection is imposed by predators or mates. In the study of animal colours, avian brood parasite-host dynamics therefore invite special consideration. Rapid advances across disciplines have paved the way for an integrative study of colour and vision in brood parasite-host systems. We now know that visually driven host defences and host life history have selected for a suite of phenotypic adaptations in parasites, including mimicry, crypsis and supernormal stimuli. This sometimes leads to vision-based host counter-adaptations and increased parasite trickery. Here, we review vision-based adaptations that arise in parasite-host interactions, emphasizing that these adaptations can be visual/sensory, cognitive or phenotypic in nature. We highlight recent breakthroughs in chemistry, genomics, neuroscience and computer vision, and we conclude by identifying important future directions. This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
NASA Astrophysics Data System (ADS)
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image-based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return the best-found configuration of image processing and classification algorithms, and of their parameters, with respect to classification accuracy. Experiments with real public datasets demonstrate the effectiveness of the developed system.
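The general idea of jointly searching preprocessing and classifier parameters can be sketched with scikit-learn's pipeline grid search; this is a simplified stand-in, not Auto-SEIA's actual search strategy or parameter space:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Joint search over preprocessing and classifier parameters: the general
# idea behind optimizing the image-processing and learning stages together.
X, y = load_digits(return_X_y=True)
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
grid = {
    "clf__kernel": ["linear", "rbf"],
    "clf__C": [0.1, 1, 10],
    "clf__gamma": ["scale", 0.01],
}
search = GridSearchCV(pipe, grid, cv=3).fit(X, y)
print(search.best_params_, f"accuracy={search.best_score_:.3f}")
```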
Development of a micromachined epiretinal vision prosthesis
NASA Astrophysics Data System (ADS)
Stieglitz, Thomas
2009-12-01
Microsystems engineering offers the tools to develop highly sophisticated miniaturized implants to interface with the nervous system. One challenging application field is the development of neural prostheses to restore vision in persons that have become blind by photoreceptor degeneration due to retinitis pigmentosa. The fundamental work that has been done in one approach is presented here. An epiretinal vision prosthesis has been developed that allows hybrid integration of electronics on one part of a thin and flexible substrate. Polyimide as a substrate material is proven to be non-cytotoxic. Non-hermetic encapsulation with parylene C was stable for at least 3 months in vivo. Chronic animal experiments proved spatially selective cortical activation after epiretinal stimulation with a 25-channel implant. Research results have been transferred successfully to companies that currently work on the medical device approval of these retinal vision prostheses in Europe and in the USA.
Effects of cortical damage on binocular depth perception.
Bridge, Holly
2016-06-19
Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across the local regions of the visual field and integration with other cues to depth. The most common cause for loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input is intact from both eyes, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including both discrete damage, temporal lobectomy and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors. PMID:27269597
Huang, Kuo-Sen; Mark, David; Gandenberger, Frank Ulrich
2006-01-01
The plate::vision is a high-throughput multimode reader capable of reading absorbance, fluorescence, fluorescence polarization, time-resolved fluorescence, and luminescence. Its performance has been shown to be quite comparable with that of other readers. When the reader is integrated into the plate::explorer, an ultrahigh-throughput screening system with event-driven software and parallel plate-handling devices, it becomes possible to run complicated assays with kinetic readouts in high-density microtiter plate formats for high-throughput screening. For the past 5 years, we have used the plate::vision and the plate::explorer to run screens and have generated more than 30 million data points. Their throughput, performance, and robustness have greatly sped up our drug discovery process.
Polish Experience of Implementing Vision Zero.
Jamroz, Kazimierz; Michalski, Lech; Żukowska, Joanna
2017-01-01
The aim of this study is to present an outline and the principles of Poland's road safety strategic programming as it has developed over the 25 years since the first Integrated Road Safety System, with a strong focus on Sweden's "Vision Zero". Countries that have successfully improved road safety have done so by following strategies centred on the idea that people are not infallible and will make mistakes. The human body can only absorb a limited amount of energy upon impact, so roads, vehicles and road safety programmes must be designed to address this. The article gives a summary of Poland's experience of programming preventative measures that have "Vision Zero" as their basis, and it evaluates the effectiveness of the relevant programmes.
Integrated long-range UAV/UGV collaborative target tracking
NASA Astrophysics Data System (ADS)
Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv
2009-05-01
Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line-of-sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, the foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required but not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on the PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated, and then deployed on real tactical platforms, an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from the PackBot and Raven platforms for a moving target in an open environment. In addition, system integration with AeroVironment's Digital Data Link onto both air and ground platforms has extended our communications range for operating the PackBot, as well as increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single-OCU display design and operation, early target-track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
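A textbook building block of decentralized track fusion is information-form combination of independent Gaussian estimates; the sketch below illustrates the idea with made-up air and ground tracks and should not be read as the project's DDF implementation:

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Information-form fusion of two independent Gaussian track estimates
    (2-D position). A textbook building block of decentralized data fusion,
    not the project's exact DDF implementation."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

# Raven (air) sees the target loosely; PackBot (ground) sees it precisely
x_air, P_air = np.array([102.0, 48.0]), np.diag([25.0, 25.0])
x_gnd, P_gnd = np.array([100.5, 50.2]), np.diag([4.0, 4.0])
x, P = fuse_tracks(x_air, P_air, x_gnd, P_gnd)
print("fused target position:", x.round(2))
```

When cross-correlations between platforms are unknown, practical DDF systems typically substitute a conservative rule such as covariance intersection for the naive inverse-covariance sum shown here.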
Healthcare provider education: from institutional boxes to dynamic networks.
Eisler, George
2009-01-01
The world recognizes the need for close collaboration in planning between the healthcare system and the post-secondary education system; this has also been advocated in the lead article. Forums and mechanisms to facilitate this collaboration are being implemented from local to global environments. Beyond the focus on competency gaps, there are important functional co-dependencies between healthcare and post-secondary education, including the need for a more formalized continuous quality improvement approach at the inter-organizational system level. The case for this close and continuous collaborative relationship is based on the following: (1) a close functional relationship, (2) joint responsibility for healthcare provider education, (3) the urgent need to address the workforce and education strategies for almost all healthcare services areas and (4) the factors that characterize successful and sustained quality improvement in complex adaptive systems. A go-forward vision consisting of an integrated web of academic health networks is proposed, each with its particular shared vision and aligned with an overall vision for healthcare in each provincial jurisdiction, as well as with national and global healthcare objectives.
Quantitative systems toxicology
Bloomingdale, Peter; Housand, Conrad; Apgar, Joshua F.; Millard, Bjorn L.; Mager, Donald E.; Burke, John M.; Shah, Dhaval K.
2017-01-01
The overarching goal of modern drug development is to optimize therapeutic benefits while minimizing adverse effects. However, inadequate efficacy and safety concerns remain the major causes of drug attrition in clinical development. For the past 80 years, toxicity testing has consisted of evaluating the adverse effects of drugs in animals to predict human health risks. The U.S. Environmental Protection Agency recognized the need to develop innovative toxicity testing strategies and asked the National Research Council to develop a long-range vision and strategy for toxicity testing in the 21st century. The vision aims to reduce the use of animals and drug development costs through the integration of computational modeling and in vitro experimental methods that evaluate the perturbation of toxicity-related pathways. Towards this vision, collaborative quantitative systems pharmacology and toxicology (QSP/QST) modeling endeavors have been initiated among numerous organizations worldwide. In this article, we discuss how quantitative structure-activity relationship (QSAR), network-based, and pharmacokinetic/pharmacodynamic modeling approaches can be integrated into the framework of QST models. Additionally, we review the application of QST models to predict cardiotoxicity and hepatotoxicity of drugs throughout their development. Cell- and organ-specific QST models are likely to become an essential component of modern toxicity testing, and provide a solid foundation towards determining individualized therapeutic windows to improve patient safety. PMID:29308440
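As a flavor of the pharmacokinetic building blocks such QST frameworks integrate, here is a one-compartment model with first-order absorption; all parameter values are illustrative, not drawn from the article:

```python
import numpy as np
from scipy.integrate import odeint

# One-compartment pharmacokinetic model with first-order absorption: the
# simplest of the PK building blocks QST frameworks integrate. Parameter
# values below are illustrative assumptions only.
ka, ke, V = 1.0, 0.2, 10.0      # absorption, elimination (1/h), volume (L)

def pk(y, t):
    gut, central = y
    return [-ka * gut, ka * gut - ke * central]

t = np.linspace(0, 24, 97)                    # hours
gut, central = odeint(pk, [100.0, 0.0], t).T  # 100 mg oral dose
conc = central / V                            # plasma concentration, mg/L
print(f"Cmax = {conc.max():.2f} mg/L at t = {t[conc.argmax()]:.2f} h")
```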
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Saiyed, Naseem H.; Smith, Marion Shayne
2005-01-01
When United States President George W. Bush announced the Vision for Space Exploration in January 2004, twelve propulsion and launch system projects were being pursued in the Next Generation Launch Technology (NGLT) Program. These projects underwent a review for near-term relevance to the Vision. Subsequently, five projects were chosen as advanced development projects by NASA's Exploration Systems Mission Directorate (ESMD). These five projects were Auxiliary Propulsion, Integrated Powerhead Demonstrator, Propulsion Technology and Integration, Vehicle Subsystems, and Constellation University Institutes. Recently, an NGLT effort in Vehicle Structures was identified as a gap technology that was executed via the Advanced Development Projects Office within ESMD. For all of these advanced development projects, there is an emphasis on producing specific, near-term technical deliverables related to space transportation that constitute a subset of the promised NGLT capabilities. The purpose of this paper is to provide a brief description of the relevancy review process and provide a status of the aforementioned projects. For each project, the background, objectives, significant technical accomplishments, and future plans will be discussed. In contrast to many of the current ESMD activities, these areas are providing hardware and testing to further develop relevant technologies in support of the Vision for Space Exploration.
Vayssier-Taussat, Muriel; Kazimirova, Maria; Hubalek, Zdenek; Hornok, Sándor; Farkas, Robert; Cosson, Jean-François; Bonnet, Sarah; Vourch, Gwenaël; Gasqui, Patrick; Mihalca, Andrei Daniel; Plantard, Olivier; Silaghi, Cornelia; Cutler, Sally; Rizzoli, Annapaola
2015-01-01
Ticks, as vectors of several notorious zoonotic pathogens, represent an important and increasing threat for human and animal health in Europe. Recent applications of new technology revealed the complexity of the tick microbiome, which may affect its vectorial capacity. Appreciation of these complex systems is expanding our understanding of tick-borne pathogens, leading us to evolve a more integrated view that embraces the ‘pathobiome’; the pathogenic agent integrated within its abiotic and biotic environments. In this review, we will explore how this new vision will revolutionize our understanding of tick-borne diseases. We will discuss the implications in terms of future research approaches that will enable us to efficiently prevent and control the threat posed by ticks. PMID:26610021
Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas
2013-08-01
This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.
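The homography step can be illustrated with OpenCV: matched points from two views of a planar scene yield H, which decomposes into candidate rotations and translations (up to scale). The camera matrix and point sets below are stand-ins, not values from the paper:

```python
import cv2
import numpy as np

# Homography estimation and decomposition sketch. Matched feature points
# from two views of a planar scene give H; decomposition yields up to four
# candidate (R, t, n) solutions. K and the point sets are assumptions.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts1 = np.float32([[100, 120], [400, 110], [420, 380], [90, 360]])
pts2 = np.float32([[120, 130], [415, 125], [430, 390], [105, 372]])

H, _ = cv2.findHomography(pts1, pts2)
n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
# A paper-style fusion with encoder and inertial data would disambiguate
# the candidates and recover the metric scale lost in pure vision.
print(f"{n_solutions} candidate motions; first rotation:\n{Rs[0].round(3)}")
```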
360 degree vision system: opportunities in transportation
NASA Astrophysics Data System (ADS)
Thibault, Simon
2007-09-01
Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and its potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager well suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides an ideal image coverage that is designed to reduce and optimize the processing. The optics can be customized for the visible, near-infrared (NIR) or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of the 360-degree vision system to enhance on-board collision avoidance systems, intelligent cruise control and parking assistance. 360-degree panoramic vision systems might enable safer highways and a significant reduction in casualties.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...
DOT National Transportation Integrated Search
1998-01-01
The Arizona Department of Transportation's (ADOT) Trailmaster Freeway Management System is integral to AZTech. Trailmaster provides state-of-the-art traffic management through a variety of electronic means, such as collecting and distributing traffic...
Studying the lower limit of human vision with a single-photon source
NASA Astrophysics Data System (ADS)
Holmes, Rebecca; Christensen, Bradley; Street, Whitney; Wang, Ranxiao; Kwiat, Paul
2015-05-01
Humans can detect a visual stimulus of just a few photons. Exactly how few is not known: psychological and physiological research suggests that the detection threshold may be as low as one photon, but the question has never been directly tested. Using a source of heralded single photons based on spontaneous parametric downconversion, we can directly characterize the lower limit of vision. This system can also be used to study temporal and spatial integration in the visual system, and to study visual attention with EEG. We may eventually even be able to investigate how human observers perceive quantum effects such as superposition and entanglement. Our progress and some preliminary results will be discussed.
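The classical framework for such threshold questions is the Poisson detection model (in the tradition of Hecht, Shlaer, and Pirenne): a flash is "seen" when at least theta photons are absorbed, with absorbed counts Poisson-distributed. The sketch below uses assumed efficiency and threshold values:

```python
from scipy.stats import poisson

# Threshold model of scotopic detection: a flash is seen when at least
# theta photons are absorbed; absorbed counts are Poisson. The quantum
# efficiency and threshold here are illustrative assumptions.
def p_seen(mean_photons_at_cornea, efficiency=0.1, theta=1):
    lam = efficiency * mean_photons_at_cornea  # mean absorbed photons
    return 1.0 - poisson.cdf(theta - 1, lam)   # P(N >= theta)

for n in (1, 5, 10, 50):
    print(f"{n:3d} photons -> P(seen) = {p_seen(n):.3f}")
```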
Simulation Based Acquisition for NASA's Office of Exploration Systems
NASA Technical Reports Server (NTRS)
Hale, Joe
2004-01-01
In January 2004, President George W. Bush unveiled his vision for NASA to advance U.S. scientific, security, and economic interests through a robust space exploration program. This vision includes the goal to extend human presence across the solar system, starting with a human return to the Moon no later than 2020, in preparation for human exploration of Mars and other destinations. In response to this vision, NASA has created the Office of Exploration Systems (OExS) to develop the innovative technologies, knowledge, and infrastructures to explore and support decisions about human exploration destinations, including the development of a new Crew Exploration Vehicle (CEV). Within the OExS organization, NASA is implementing Simulation Based Acquisition (SBA), a robust Modeling & Simulation (M&S) environment integrated across all acquisition phases and programs/teams, to make the realization of the President's vision more certain. Executed properly, SBA will foster better informed, timelier, and more defensible decisions throughout the acquisition life cycle. By doing so, SBA will improve the quality of NASA systems and speed their development, at less cost and risk than would otherwise be the case. SBA is a comprehensive, Enterprise-wide endeavor that necessitates an evolved culture, a revised spiral acquisition process, and an infrastructure of advanced Information Technology (IT) capabilities. SBA encompasses all project phases (from requirements analysis and concept formulation through design, manufacture, training, and operations), professional disciplines, and activities that can benefit from employing SBA capabilities. SBA capabilities include: developing and assessing system concepts and designs; planning manufacturing, assembly, transport, and launch; training crews, maintainers, launch personnel, and controllers; planning and monitoring missions; responding to emergencies by evaluating effects and exploring solutions; and communicating across the OExS enterprise, within the Government, and with the general public. The SBA process features empowered collaborative teams (including industry partners) to integrate requirements, acquisition, training, operations, and sustainment. The SBA process also utilizes an increased reliance on and investment in M&S to reduce design risk. SBA originated as a joint Industry and Department of Defense (DoD) initiative to define and integrate an acquisition process that employs robust, collaborative use of M&S technology across acquisition phases and programs. The SBA process was successfully implemented in the Air Force's Joint Strike Fighter (JSF) Program.
Morton, Michael; Paice, Elisabeth
2016-01-01
In North West London, health and social care leaders decided to design a system of integrated care with the aim of improving the quality of care and supporting people to maintain independence and participation in their community. Patients and carers, known as ‘lay partners,’ were to be equal partners in co-production of the system. Lay partners were recruited by sending a role profile to health, social care and voluntary organisations and requesting nominations. They formed a Lay Partners Advisory Group from which pairs were allocated to system design workstreams, such as which population to focus on, financial flow, information technology and governance. A larger and more diverse Lay Partners Forum provided feedback on the emerging plans. A key outcome of this approach was the development of an integration toolkit co-designed with lay partners. Lay partners provided challenge, encouraged innovation, improved communication, and held the actions of other partners to account to ensure the vision and aims of the emerging integrated care system were met. Key lessons from the North West London experience for effective co-production include: recruiting patients and carers with experience of strategic work; commitment to the vision; willingness to challenge and to listen; strong connections within the community being served; and enough time to do the work. Including lay partners in co-design from the start, and at every level, was important. Agreeing the principles of working together, providing support and continuously recruiting lay representatives to represent their communities are keys to effective co-production. PMID:27616958
Mercer, Tim; Gardner, Adrian; Andama, Benjamin; Chesoli, Cleophas; Christoffersen-Deb, Astrid; Dick, Jonathan; Einterz, Robert; Gray, Nick; Kimaiyo, Sylvester; Kamano, Jemima; Maritim, Beryl; Morehead, Kirk; Pastakia, Sonak; Ruhl, Laura; Songok, Julia; Laktabai, Jeremiah
2018-05-08
The Academic Model Providing Access to Healthcare (AMPATH) has been a model academic partnership in global health for nearly three decades, leveraging the power of a public-sector academic medical center and the tripartite academic mission (service, education, and research) to meet the challenges of delivering health care in a low-income setting. Drawing our mandate from the health needs of the population, we have scaled up service delivery for HIV care, and over the last decade, expanded our focus on non-communicable chronic diseases, health system strengthening, and population health more broadly. Success of such a transformative endeavor requires new partnerships, as well as a unification of vision and alignment of strategy among all partners involved. Leveraging the power of partnerships and spreading the vision for population health: we describe how AMPATH built on its collective experience as an academic partnership to support the public-sector health care system, with a major focus on scaling up HIV care in western Kenya, to a system poised to take responsibility for the health of an entire population. We highlight global trends and local contextual factors that led to the genesis of this new vision, and then describe the key tenets of AMPATH's population health care delivery model: comprehensive, integrated, community-centered, and financially sustainable with a path to universal health coverage. Finally, we share how AMPATH partnered with strategic planning and change management experts from the private sector to use a novel approach called a 'Learning Map®' to collaboratively develop and share a vision of population health, and achieve strategic alignment with key stakeholders at all levels of the public-sector health system in western Kenya. We describe how AMPATH has leveraged the power of partnerships to move beyond the traditional disease-specific silos in global health to a model focused on health systems strengthening and population health. Furthermore, we highlight a novel, collaborative tool to communicate our vision and achieve strategic alignment among stakeholders at all levels of the health system. We hope this paper can serve as a roadmap for other global health partners to develop and share transformative visions for improving population health globally.
A real-time surface inspection system for precision steel balls based on machine vision
NASA Astrophysics Data System (ADS)
Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen
2016-07-01
Precision steel balls are among the most fundamental components for motion and power transmission, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs s⁻¹ with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.
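The abstract does not detail the inspection algorithms themselves; as a rough illustration of the thresholding-and-blob-area style of check such a system might run on each unfolded surface image, consider the sketch below. The threshold and minimum flaw area are invented placeholders, not the paper's calibrated values.

```python
# Minimal sketch: dark-pixel thresholding plus connected-component area
# filtering on an "unfolded" ball-surface image. Parameters are assumptions.
import numpy as np
from scipy import ndimage

def count_surface_flaws(unfolded, dark_thresh=60, min_area_px=20):
    """Count candidate flaws in a grayscale unfolded-surface image."""
    candidates = unfolded < dark_thresh          # defects image darker than polished steel
    labels, n = ndimage.label(candidates)        # connected-component labeling
    if n == 0:
        return 0
    # blob areas; components smaller than min_area_px are treated as noise
    areas = ndimage.sum(candidates, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area_px))

# Usage: a synthetic bright strip with one dark scratch.
img = np.full((100, 400), 200, dtype=np.uint8)
img[40:45, 120:160] = 30
print(count_surface_flaws(img))   # -> 1
```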
[Incorporation and adaptation of the postmodern belief system].
Garzón Pérez, Adela
2012-01-01
Every society develops a particular system of beliefs that summarizes its vision of socio-political organization, culture and interpersonal relationships. Each of these three basic dimensions has different forms, depending on the spatial and temporal context of societies. The belief system of the service societies is characterized by a democratic vision of social and political organization, rejection of radical social changes and high levels of interpersonal trust. This paper empirically examines the incorporation and adaptation of the postmodern belief system in a sample of university students. The participants belong to a country that is slowly integrating into the service societies. We used a scale of postmodernity to analyze the incorporation of the postmodern belief system. The results indicate that there is a peculiar combination of the three basic dimensions of the postmodern belief system, where the postmodern conceptions of culture and social relationships have lower acceptance.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
... Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...
Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system
NASA Astrophysics Data System (ADS)
Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping
2015-05-01
Irregularly shaped objects with different 3-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of those irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.
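The abstract does not give the calibration math, but the stereo half of such a pipeline ultimately reduces to triangulating 3D surface points from matched pixels in two calibrated cameras. Below is a minimal sketch of standard linear (DLT) triangulation; the toy projection matrices stand in for an actual SLGS calibration.

```python
# Linear (DLT) triangulation of one 3D point from a matched pixel pair.
# P1, P2 are assumed 3x4 camera projection matrices from a prior calibration.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate a 3D point from pixel matches x1, x2 (each (u, v))."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null-space solution
    X = Vt[-1]
    return X[:3] / X[3]              # de-homogenize

# Usage: two toy cameras 0.1 m apart viewing a point at (0, 0, 1).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 1.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # -> approx [0. 0. 1.]
```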
Testing and evaluation of a wearable augmented reality system for natural outdoor environments
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Cook, James; Sherrill, Todd; Snarski, Stephen; Russler, Pat; Clipp, Brian; Karl, Robert; Wenger, Eric; Bennett, Matthew; Mauger, Jennifer; Church, William; Towles, Herman; MacCabe, Stephen; Webb, Jeffrey; Lupo, Jasper; Frahm, Jan-Michael; Dunn, Enrique; Leslie, Christopher; Welch, Greg
2013-05-01
This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier's view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10 mrad) using these vision-based methods.
Maintaining a Cognitive Map in Darkness: The Need to Fuse Boundary Knowledge with Path Integration
Cheung, Allen; Ball, David; Milford, Michael; Wyeth, Gordon; Wiles, Janet
2012-01-01
Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's “cognitive map”, or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acted independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot be used to maintain any stable place representation beyond two to three minutes. We then use a measure of place stability based on information theoretic principles to prove that featureless boundaries alone cannot be used to improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we then address the question of whether their combination is sufficient and – we conjecture – necessary to maintain place stability for prolonged periods without vision. We addressed this question in simulations and robot experiments using a navigation model comprising a particle filter and boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions including place field splitting and grid field rescaling if the true arena geometry differs from the acquired boundary map. We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments. PMID:22916006
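As a loose illustration of the paper's central claim (not its actual model, which couples a particle filter to an acquired boundary map), a one-dimensional particle filter shows how occasional boundary-contact observations arrest the unbounded error growth of pure path integration. All noise magnitudes below are invented for the sketch.

```python
# 1D sketch: path integration alone lets the estimate diffuse; resampling on
# wall-contact observations keeps it stable. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L_arena, N = 100.0, 500                 # walls at 0 and L_arena; particle count
particles = rng.uniform(0, L_arena, N)
true_pos, odo_sigma = 50.0, 1.0

for t in range(300):
    step = rng.normal(0.0, 2.0)                    # random foraging step
    true_pos = float(np.clip(true_pos + step, 0, L_arena))
    # Prediction: apply the noisy self-motion estimate to every particle.
    particles = np.clip(particles + step + rng.normal(0, odo_sigma, N), 0, L_arena)
    # Correction: a boundary observation fires only when a wall is contacted.
    if true_pos < 2.0 or true_pos > L_arena - 2.0:
        wall = 0.0 if true_pos < 2.0 else L_arena
        w = np.exp(-0.5 * ((particles - wall) / 2.0) ** 2)   # contact likelihood
        particles = rng.choice(particles, N, p=w / w.sum()) \
                    + rng.normal(0, 0.5, N)                  # resample + jitter

print(f"estimate {particles.mean():.1f} +/- {particles.std():.1f}, truth {true_pos:.1f}")
```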
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce the failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (which report 1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
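The Kalman-filter fusion step can be sketched in stripped-down form. The paper's filter is an extended Kalman filter over full pose; the linear 2D analogue below only illustrates the idea of fusing drifting relative-motion increments (visual odometry) with occasional absolute fixes (GPS). The covariances are assumptions.

```python
# Linear 2D analogue of odometry/GPS fusion in a Kalman filter.
import numpy as np

x = np.zeros(2)                 # state: (x, y) position estimate
P = np.eye(2) * 0.01            # state covariance
Q = np.eye(2) * 0.05            # per-step odometry noise (assumed)
R = np.eye(2) * 4.0             # GPS noise, ~2 m std dev (assumed)

def predict(x, P, odom_delta):
    return x + odom_delta, P + Q           # F = I for pure translation

def update(x, P, gps_xy):
    S = P + R                               # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)                # Kalman gain
    return x + K @ (gps_xy - x), (np.eye(2) - K) @ P

rng = np.random.default_rng(1)
truth = np.zeros(2)
for step in range(200):
    delta = np.array([0.5, 0.0])                          # move 0.5 m east
    truth = truth + delta
    x, P = predict(x, P, delta + rng.normal(0, 0.2, 2))   # drifting odometry
    if step % 10 == 0:                                    # occasional GPS fix
        x, P = update(x, P, truth + rng.normal(0, 2.0, 2))
print("final error (m):", np.linalg.norm(x - truth))
```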
Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis
NASA Astrophysics Data System (ADS)
Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario
2015-12-01
Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against a major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee subject. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.
Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.
Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario
2015-12-01
Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against a major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee subject. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.
Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin
2015-01-01
Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust. PMID:25912350
Test and Verification Approach for the NASA Constellation Program
NASA Technical Reports Server (NTRS)
Strong, Edward
2008-01-01
This viewgraph presentation is a test and verification approach for the NASA Constellation Program. The contents include: 1) The Vision for Space Exploration: Foundations for Exploration; 2) Constellation Program Fleet of Vehicles; 3) Exploration Roadmap; 4) Constellation Vehicle Approximate Size Comparison; 5) Ares I Elements; 6) Orion Elements; 7) Ares V Elements; 8) Lunar Lander; 9) Map of Constellation content across NASA; 10) CxP T&V Implementation; 11) Challenges in CxP T&V Program; 12) T&V Strategic Emphasis and Key Tenets; 13) CxP T&V Mission & Vision; 14) Constellation Program Organization; 15) Test and Evaluation Organization; 16) CxP Requirements Flowdown; 17) CxP Model Based Systems Engineering Approach; 18) CxP Verification Planning Documents; 19) Environmental Testing; 20) Scope of CxP Verification; 21) CxP Verification - General Process Flow; 22) Avionics and Software Integrated Testing Approach; 23) A-3 Test Stand; 24) Space Power Facility; 25) MEIT and FEIT; 26) Flight Element Integrated Test (FEIT); 27) Multi-Element Integrated Testing (MEIT); 28) Flight Test Driving Principles; and 29) Constellation's Integrated Flight Test Strategy Low Earth Orbit Servicing Capability.
78 FR 55086 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-09
...: Emerging Technologies and Training Neurosciences Integrated Review Group; Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: October 3-4, 2013. Time: 8:00 a.m. to 11:00 a.m... . Name of Committee: Bioengineering Sciences & Technologies Integrated Review Group; Biomaterials and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...
FELIN: tailored optronics and systems solutions for dismounted combat
NASA Astrophysics Data System (ADS)
Milcent, A. M.
2009-05-01
The FELIN French modernization program for dismounted combat provides armies with info-centric systems which dramatically enhance the performance of the soldier and the platoon. Sagem now has available a portfolio of equipment providing C4I, data and voice digital communication, and enhanced vision for day and night operations, through compact high-performance electro-optics. The FELIN system provides the infantryman with a high-tech integrated and modular system which significantly increases detection, recognition and identification capabilities, situation awareness and information sharing, in any dismounted close combat situation. Among the key technologies used in this system, infrared and intensified vision provide a significant improvement in capability, observation performance and protection of the ground soldiers. This paper presents the developed equipment in detail, with an emphasis on lessons learned from the technical and operational feedback from dismounted close combat field tests.
Intelligent surgical laser system configuration and software implementation
NASA Astrophysics Data System (ADS)
Hsueh, Chi-Fu T.; Bille, Josef F.
1992-06-01
An intelligent surgical laser system, which can help the ophthalmologist achieve higher precision and control during procedures, has been developed by ISL as model CLS 4001. In addition to the laser and laser delivery system, the system is equipped with a vision system (IPU), robotics motion control (MCU), and a closed-loop tracking system (ETS) that tracks the eye in three dimensions (X, Y and Z). The initial patient setup is computer controlled with guidance from the vision system. The tracking system is automatically engaged when the target is in position. A multi-level tracking system, developed by integrating the vision and tracking systems, has been able to maintain the laser beam precisely on target. The capabilities of automatic eye setup and tracking in three dimensions provide improved accuracy and measurement repeatability. The system is operated through the Surgical Control Unit (SCU). The SCU communicates with the IPU and the MCU through both Ethernet and RS232. Various scanning patterns (e.g., line, curve, circle, spiral) can be selected with given parameters. When a warning is activated, a voice message is played that normally requires a panel-touch acknowledgement. The reliability of the system is ensured at three levels: (1) hardware, (2) real-time software monitoring, and (3) the user. The system is currently under clinical validation.
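As an illustration of the kind of parameterized scan pattern such a control unit exposes, here is a hypothetical generator for the spiral pattern, sampled at roughly constant arc length for even energy deposition. The function name, parameters, and units are invented for the sketch, not taken from the CLS 4001.

```python
# Archimedean spiral r = a * theta (a = pitch / 2*pi), sampled so that
# consecutive points are ~`step` apart in arc length: ds/dtheta = sqrt(r^2 + a^2).
import numpy as np

def spiral_scan(turns=3, pitch=0.2, step=0.05):
    """Return (x, y) points along a spiral scan path (units arbitrary)."""
    a = pitch / (2 * np.pi)
    pts, theta = [], 0.0
    while theta < turns * 2 * np.pi:
        r = a * theta
        pts.append((r * np.cos(theta), r * np.sin(theta)))
        theta += step / np.hypot(r, a)     # constant-arc-length increment
    return np.array(pts)

path = spiral_scan()
print(len(path), "points, outer radius", round(float(np.hypot(*path[-1])), 3))
```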
Helmet-Mounted Displays: Sensation, Perception and Cognition Issues
2009-01-01
Inc., web site: http://www.metavr.com/technology/papers/syntheticvision.html Helmetag, A., Halbig, C., Kubbat, W., and Schmidt, R. (1999...system-of-systems.” One integral system is a “head-borne vision enhancement” system (an HMD) that provides fused I2/IR sensor imagery (U.S. Army Natick...Using microwave, radar, I2, infrared (IR), and other technology-based imaging sensors, the “seeing” range of the human eye is extended into the
Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng
2017-01-01
A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems.
Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng
2017-01-01
A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734
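The FDMF itself is fuzzy-rule based; as a much simpler surrogate that captures only the aggregation idea, the sketch below fuses each agent's target probability weighted by its reported confidence. Both agents' numbers are illustrative, and this linear rule is not the paper's actual fuser.

```python
# Confidence-weighted fusion of human (BCI) and machine (vision) decisions.
def fuse(decisions):
    """decisions: list of (p_target, confidence) pairs, each in [0, 1]."""
    total_conf = sum(c for _, c in decisions)
    if total_conf == 0:
        return 0.5                               # no evidence either way
    return sum(p * c for p, c in decisions) / total_conf

human = (0.8, 0.6)     # RSVP-BCI output: "probably target", moderate confidence
machine = (0.3, 0.2)   # vision classifier output: weak disagreement
print(fuse([human, machine]))   # -> 0.675, dominated by the more confident agent
```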
Jensen, Jan L; Travers, Andrew H
2017-05-01
Nationally, emphasis on the importance of evidence-based practice (EBP) in emergency medicine and emergency medical services (EMS) has continuously increased. However, meaningful incorporation of effective and sustainable EBP into clinical and administrative decision-making remains a challenge. We propose a vision for EBP in EMS: Canadian EMS clinicians and leaders will understand and use the best available evidence for clinical and administrative decision-making, to improve patient health outcomes, the capability and quality of EMS systems of care, and safety of patients and EMS professionals. This vision can be implemented with the use of a structure, process, system, and outcome taxonomy to identify current barriers to true EBP, to recognize the opportunities that exist, and to propose corresponding strategies for local EMS agencies and at the national level. Framing local and national discussions with this approach will be useful for developing a cohesive and collaborative Canadian EBP strategy.
Development Of Autonomous Systems
NASA Astrophysics Data System (ADS)
Kanade, Takeo
1989-03-01
In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: Navlab for the Autonomous Land Vehicle and Ambler for the Mars Rover. These two systems are for different purposes: the Navlab is a four-wheeled vehicle (van) for road and open terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.
Integrating PCLIPS into ULowell's Lincoln Logs: Factory of the future
NASA Technical Reports Server (NTRS)
Mcgee, Brenda J.; Miller, Mark D.; Krolak, Patrick; Barr, Stanley J.
1990-01-01
We are attempting to show how independent but cooperating expert systems, executing within a parallel production system (PCLIPS), can operate and control a completely automated, fault tolerant prototype of a factory of the future (The Lincoln Logs Factory of the Future). The factory consists of a CAD system for designing the Lincoln Log Houses, two workcells, and a materials handling system. A workcell consists of two robots, part feeders, and a frame mounted vision system.
The Adaptive Optics Summer School Laboratory Activities
NASA Astrophysics Data System (ADS)
Ammons, S. M.; Severson, S.; Armstrong, J. D.; Crossfield, I.; Do, T.; Fitzgerald, M.; Harrington, D.; Hickenbotham, A.; Hunter, J.; Johnson, J.; Johnson, L.; Li, K.; Lu, J.; Maness, H.; Morzinski, K.; Norton, A.; Putnam, N.; Roorda, A.; Rossi, E.; Yelda, S.
2010-12-01
Adaptive Optics (AO) is a new and rapidly expanding field of instrumentation, yet astronomers, vision scientists, and general AO practitioners are largely unfamiliar with the root technologies crucial to AO systems. The AO Summer School (AOSS), sponsored by the Center for Adaptive Optics, is a week-long course for training graduate students and postdoctoral researchers in the underlying theory, design, and use of AO systems. AOSS participants include astronomers who expect to utilize AO data, vision scientists who will use AO instruments to conduct research, opticians and engineers who design AO systems, and users of high-bandwidth laser communication systems. In this article we describe new AOSS laboratory sessions implemented in 2006-2009 for nearly 250 students. The activity goals include boosting familiarity with AO technologies, reinforcing knowledge of optical alignment techniques and the design of optical systems, and encouraging inquiry into critical scientific questions in vision science using AO systems as a research tool. The activities are divided into three stations: Vision Science, Fourier Optics, and the AO Demonstrator. We briefly overview these activities, which are described fully in other articles in these conference proceedings (Putnam et al., Do et al., and Harrington et al., respectively). We devote attention to the unique challenges encountered in the design of these activities, including the marriage of inquiry-like investigation techniques with complex content and the need to tune depth to a graduate- and PhD-level audience. According to before-after surveys conducted in 2008, the vast majority of participants found that all activities were valuable to their careers, although direct experience with integrated, functional AO systems was particularly beneficial.
The contributions of vision and haptics to reaching and grasping
Stone, Kayla D.; Gonzalez, Claudia L. R.
2015-01-01
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal, and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to use the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shape hand preference. PMID:26441777
Vision-based augmented reality system
NASA Astrophysics Data System (ADS)
Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan
2003-04-01
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user, so that users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are most important. A vision-based method is used to estimate the external parameters of the CCD camera by tracking four known points of different colors. It achieves sufficient accuracy for non-critical applications such as gaming and annotation.
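Recovering camera extrinsics from four known fiducial points is an instance of the perspective-n-point problem. The sketch below uses OpenCV's general-purpose solvePnP as a stand-in for the paper's own estimator; the marker layout, pixel coordinates, and intrinsics are invented for the example.

```python
# Camera pose from four known, color-coded table markers (PnP).
import numpy as np
import cv2

# Marker positions on the table plane, in meters (z = 0).
object_pts = np.array([[0.0, 0.0, 0], [0.2, 0.0, 0],
                       [0.2, 0.2, 0], [0.0, 0.2, 0]], dtype=np.float64)
# Detected pixel centroids, one per marker color, from the tracker.
image_pts = np.array([[310, 250], [420, 255],
                      [415, 365], [305, 360]], dtype=np.float64)
K = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])   # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix of the camera pose
    print("camera translation:", tvec.ravel())
```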
An Ecological Perspective on Learning Progressions as Road Maps for Learning
ERIC Educational Resources Information Center
Engelhard, George, Jr.; Sullivan, Rubye K.
2011-01-01
The Black, Wilson, and Yao (this issue) provide a wide-ranging commentary and vision of the interrelationships among curriculum, pedagogy, and assessment. Specifically, they describe how the Berkeley Evaluation & Assessment Research (BEAR) Center Assessment System can be used to integrate and systematize these areas. This commentary focuses on…
Artificial Intelligence and the High School Computer Curriculum.
ERIC Educational Resources Information Center
Dillon, Richard W.
1993-01-01
Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…
New Goals and Changed Roles: Re-Visioning Teacher Education.
ERIC Educational Resources Information Center
Hawley, Willis D.
1993-01-01
It is argued that the current agenda for improving teacher education only seeks to improve the present system. A new, radically restructured approach would train teachers to learn from their experiences, communicate clearly, know their subject matter, integrate ideas into practice, apply learning theory and child development principles, and…
NASA Astrophysics Data System (ADS)
McKinley, John B.; Pierson, Roger; Ertem, M. C.; Krone, Norris J., Jr.; Cramer, James A.
2008-04-01
Flight tests were conducted at Greenbrier Valley Airport (KLWB) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Norris Electro Optical Systems Corporation (NEOC) developmental ultraviolet (UV) sensor. These flights were sponsored by NEOC under a Federal Aviation Administration program, and the ultraviolet concepts, technology, system mechanization, and hardware for landing during low-visibility conditions have been patented by NEOC. Imagery from the UV sensor, HUD guidance cues, and out-the-window videos were separately recorded at the engineering workstation for each approach. Inertial flight path data were also recorded. Various configurations of portable UV emitters were positioned along the runway edge and threshold. The UV imagery of the runway outline was displayed on the HUD along with guidance generated from the mission computer. Enhanced Flight Vision System (EFVS) approaches with the UV sensor were conducted from the initial approach fix to the ILS decision height in both VMC and IMC. Although the availability of low visibility conditions during the flight test period was limited, results from previous fog range testing concluded that UV EFVS has the performance capability to penetrate CAT II runway visual range obscuration. Furthermore, independent analysis has shown that existing runway lights emit sufficient UV radiation without the need for augmentation other than lens replacement with UV-transmissive quartz lenses. Consequently, UV sensors should qualify as conforming to FAA requirements for EFVS approaches. Combined with a Synthetic Vision System (SVS), UV EFVS would function both as a precision landing aid and as an integrity monitor for the GPS and SVS database.
Betti, Viviana; Corbetta, Maurizio; de Pasquale, Francesco; Wens, Vincent; Della Penna, Stefania
2018-04-11
Network hubs represent points of convergence for the integration of information across many different nodes and systems. Although a great deal is known on the topology of hub regions in the human brain, little is known about their temporal dynamics. Here, we examine the static and dynamic centrality of hub regions when measured in the absence of a task (rest) or during the observation of natural or synthetic visual stimuli. We used Magnetoencephalography (MEG) in humans (both sexes) to measure static and transient regional and network-level interaction in α- and β-band limited power (BLP) in three conditions: visual fixation (rest), viewing of movie clips (natural vision), and time-scrambled versions of the same clips (scrambled vision). Compared with rest, we observed in both movie conditions a robust decrement of α-BLP connectivity. Moreover, both movie conditions caused a significant reorganization of connections in the α band, especially between networks. In contrast, β-BLP connectivity was remarkably similar between rest and natural vision. Not only did the topology not change, but the joint dynamics of hubs in a core network during natural vision were predicted by similar fluctuations in the resting state. We interpret these findings by suggesting that slow-varying fluctuations of integration occurring in higher-order regions in the β band may be a mechanism to anticipate and predict slow-varying temporal patterns of the visual environment. SIGNIFICANCE STATEMENT A fundamental question in neuroscience concerns the function of spontaneous brain connectivity. Here, we tested the hypothesis that topology of intrinsic brain connectivity and its dynamics might predict those observed during natural vision. Using MEG, we tracked the static and time-varying brain functional connectivity when observers were either fixating or watching different movie clips. The spatial distribution of connections and the dynamics of centrality of a set of regions were similar during rest and movie in the β band, but not in the α band. These results support the hypothesis that the intrinsic β-rhythm integration occurs with a similar temporal structure during natural vision, possibly providing advanced information about incoming stimuli. Copyright © 2018 the authors.
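Band-limited power connectivity of this kind is commonly computed by band-passing each signal, taking the Hilbert envelope, and correlating the slow envelopes. The sketch below follows that generic recipe, not the study's exact MEG pipeline; the band edges are the usual alpha/beta conventions and the signals are synthetic.

```python
# Generic BLP-envelope correlation between two signals.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def blp_correlation(sig_a, sig_b, fs, band):
    """Correlate the band-limited power envelopes of two signals."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env_a = np.abs(hilbert(filtfilt(b, a, sig_a)))   # band-limited power envelope
    env_b = np.abs(hilbert(filtfilt(b, a, sig_b)))
    return np.corrcoef(env_a, env_b)[0, 1]

# Toy pair: two 20 Hz oscillations sharing one slow amplitude modulation.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
amp = 1 + 0.5 * np.sin(2 * np.pi * 0.3 * t)          # shared slow modulation
x = amp * np.sin(2 * np.pi * 20 * t) + 0.2 * rng.normal(size=t.size)
y = amp * np.sin(2 * np.pi * 20 * t + 1.0) + 0.2 * rng.normal(size=t.size)
print("beta-band BLP correlation:", round(blp_correlation(x, y, fs, (13.0, 30.0)), 2))
```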
Integrating Mobile Robotics and Vision with Undergraduate Computer Science
ERIC Educational Resources Information Center
Cielniak, G.; Bellotto, N.; Duckett, T.
2013-01-01
This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…
ERIC Educational Resources Information Center
Albion, Peter R.; Ertmer, Peggy A.
2002-01-01
Discussion of the successful adoption and use of information technology in education focuses on teacher's personal philosophical beliefs and how they influence the successful integration of technology. Highlights include beliefs and teacher behavior; changing teachers' beliefs; and using technology to affect change in teachers' visions and…
ERIC Educational Resources Information Center
Gidley, Jennifer M.
2007-01-01
Rudolf Steiner and Ken Wilber claim that human consciousness is evolving beyond the "formal", abstract, intellectual mode toward a "post-formal", integral mode. Wilber calls this "vision-logic" and Steiner calls it "consciousness/spiritual soul". Both point to the emergence of more complex, dialectical,…
Enhanced Flight Vision Systems Operational Feasibility Study Using Radar and Infrared Sensors
NASA Technical Reports Server (NTRS)
Etherington, Timothy J.; Kramer, Lynda J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.
2015-01-01
Approach and landing operations during periods of reduced visibility have plagued aircraft pilots since the beginning of aviation. Although techniques are currently available to mitigate some of the visibility conditions, these operations are still ultimately limited by the pilot's ability to "see" required visual landing references (e.g., markings and/or lights of threshold and touchdown zone) and require significant and costly ground infrastructure. Certified Enhanced Flight Vision Systems (EFVS) have shown promise to lift the obscuration veil. They allow the pilot to operate with enhanced vision, in lieu of natural vision, in the visual segment to enable equivalent visual operations (EVO). An aviation standards document was developed with industry and government consensus for using an EFVS for approach, landing, and rollout to a safe taxi speed in visibilities as low as 300 feet runway visual range (RVR). These new standards establish performance, integrity, availability, and safety requirements to operate in this regime without reliance on a pilot's or flight crew's natural vision by use of a fail-operational EFVS. A pilot-in-the-loop high-fidelity motion simulation study was conducted at NASA Langley Research Center to evaluate the operational feasibility, pilot workload, and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 feet RVR by use of vision system technologies on a head-up display (HUD) without the need for, or reliance on, natural vision. Twelve crews flew various landing and departure scenarios at 1800, 1000, 700, and 300 feet RVR. This paper details the non-normal results of the study, including objective and subjective measures of performance and acceptability. The study validated the operational feasibility of approach and departure operations, and success was independent of visibility conditions. Failures were handled within the lateral confines of the runway for all conditions tested. The fail-operational concept with the pilot in the loop needs further study.
Robot vision system programmed in Prolog
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Hack, Ralf
1995-10-01
This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)
Autonomic Computing: Panacea or Poppycock?
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2005-01-01
Autonomic Computing arose out of a need for a means to cope with the rapidly growing complexity of integrating, managing, and operating computer-based systems, as well as a need to reduce the total cost of ownership of today's systems. Autonomic Computing (AC) as a discipline was proposed by IBM in 2001, with the vision of developing self-managing systems. As the name implies, the influence for the new paradigm is the human body's autonomic system, which regulates vital bodily functions such as the control of heart rate, the body's temperature and blood flow - all without conscious effort. The vision is to create selfware through self-* properties. The initial set of properties, in terms of objectives, were self-configuring, self-healing, self-optimizing and self-protecting, along with attributes of self-awareness, self-monitoring and self-adjusting. This self-* list has grown: self-anticipating, self-critical, self-defining, self-destructing, self-diagnosing, self-governing, self-organized, self-reflecting, and self-simulating, for instance.
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1991-01-01
The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
Smart factory in the context of 4th industrial revolution: challenges and opportunities for Romania
NASA Astrophysics Data System (ADS)
Pîrvu, B. C.; Zamfirescu, C. B.
2017-08-01
Manufacturing companies, independent of sector and size, must be able to produce lot-size-one products just-in-time at a competitive cost. Coping with this demand for high adaptability and short reaction times proves to be very challenging. New approaches must be considered for designing modular, intelligent and cooperative production systems which are easy to integrate with the entire factory. The term coined for such networks of intelligent, interacting artefacts is cyber-physical systems (CPS). CPS is often used in the context of Industry 4.0 - what many consider the fourth industrial revolution. The paper presents an overview of the key technological and social requirements for mapping the Smart Factory vision into reality. Finally, global and Romania-specific challenges hindering the vision of a true Smart Factory from becoming reality are presented.
Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications
NASA Astrophysics Data System (ADS)
Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon
1997-04-01
A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines the Annealing Cellular Neural Network (ACNN) and the Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 3D MCM based avionics architecture for NASA's New Millennium Program is also described.
Topography from shading and stereo
NASA Technical Reports Server (NTRS)
Horn, Berthold K. P.
1994-01-01
Methods developed in machine vision for exploiting photometric information in images can be applied to planetary imagery. Integrating shape from shading, binocular stereo, and photometric stereo yields a robust system for recovering detailed surface shape and surface reflectance information. Such a system is useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface.
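One of the ingredients named here, photometric stereo, has a compact classical form: with three images taken under known light directions, per-pixel intensity obeys I = L(ρn) for a Lambertian surface, so a linear solve recovers albedo-scaled normals. A minimal sketch on synthetic data follows; the light directions and surface are invented for the example.

```python
# Classical 3-light Lambertian photometric stereo on a synthetic surface.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])       # three known light directions (assumed)

def normals_from_shading(I_stack, L):
    """I_stack: (3, H, W) intensities; returns unit normals (H, W, 3)."""
    h, w = I_stack.shape[1:]
    I = I_stack.reshape(3, -1)                 # one intensity triple per pixel
    g = np.linalg.solve(L, I)                  # g = albedo * normal, per pixel
    rho = np.linalg.norm(g, axis=0) + 1e-12    # recovered albedo
    return (g / rho).T.reshape(h, w, 3)

# Synthetic flat surface tilted toward +x: n = (sin a, 0, cos a).
n_true = np.array([np.sin(0.3), 0.0, np.cos(0.3)])
I_stack = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))
print(normals_from_shading(I_stack, L)[0, 0])   # ~ n_true
```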
Executing on Integration: The Key to Success in Mergers and Acquisitions.
Bradley, Carol
2016-01-01
Health care mergers and acquisitions require a clearly stated vision and exquisite planning of integration activities to provide the best possible conditions for a successful transaction. During the due diligence process, key steps can be taken to create a shared vision and a plan to inspire confidence and build enthusiasm for all stakeholders. Integration planning should include a defined structure, roles and responsibilities, as well as a method for evaluation.
Principles for Health System Capacity Planning: Insights for Healthcare Leaders.
Shaw, James; Wong, Ivy; Griffin, Bailey; Robertson, Michael; Bhatia, R Sacha
2017-01-01
Jurisdictions across Canada and around the world face the challenge of planning high-performing and sustainable health systems in response to growing healthcare demands. In this paper, we report on the process of developing principles for health system capacity planning by the Ministry of Health and Long-Term Care in Ontario. Integrating the results of a literature review on health system planning and a symposium with representatives from local health integration networks, we describe the following six principles in detail: (1) develop an aspirational vision, (2) establish clear leadership, (3) commit to stakeholder engagement, (4) engage patients and the public, (5) build analytics infrastructure and (6) revise policy when necessary.
Lack of integration governance in ERP development: a case study on causes and effects
NASA Astrophysics Data System (ADS)
Kähkönen, Tommi; Smolander, Kari; Maglyas, Andrey
2017-09-01
The development of an enterprise resource planning (ERP) system actually starts after it has been implemented and taken into use. It is necessary to integrate ERP with other business information systems inside and outside the company. Using grounded theory, we aim to understand how integration challenges emerged in a large manufacturing enterprise when its long-term ERP system reached the beginning of its retirement. Structural changes, an information technology governance model, lack of organisational vision, the absence of architectural descriptions, lack of collaboration, cost cutting, and organisational culture made integration governance troublesome. As a consequence, the enterprise suffered from several undesired effects, such as complex integration scenarios between internal systems and failure to provide its customers with the needed information. The reduction of costs strengthened the organisational silos further and led to unrealised business process improvements. We provide practitioners with four recommendations. First, the organisational goals for integration should be made explicit. Second, when evaluating the needs and impacts of integration, a documented architectural description of the system landscape should be utilised. Third, the role of IT should be emphasised in development decision-making, and fourth, collaboration is the core ingredient of successful integration governance.
A Concept of Operations for an Integrated Vehicle Health Assurance System
NASA Technical Reports Server (NTRS)
Hunter, Gary W.; Ross, Richard W.; Berger, David E.; Lekki, John D.; Mah, Robert W.; Perey, Danie F.; Schuet, Stefan R.; Simon, Donald L.; Smith, Stephen W.
2013-01-01
This document describes a Concept of Operations (ConOps) for an Integrated Vehicle Health Assurance System (IVHAS). This ConOps is associated with the Maintain Vehicle Safety (MVS) between Major Inspections Technical Challenge in the Vehicle Systems Safety Technologies (VSST) Project within NASA's Aviation Safety Program. In particular, this document seeks to describe an integrated system concept for vehicle health assurance that integrates ground-based inspection and repair information with in-flight measurement data for airframe, propulsion, and avionics subsystems. The MVS Technical Challenge intends to maintain vehicle safety between major inspections by developing and demonstrating new integrated health management and failure prevention technologies to assure the integrity of vehicle systems between major inspection intervals and maintain vehicle state awareness during flight. The approach provided by this ConOps is intended to help optimize technology selection and development, as well as allow the initial integration and demonstration of these subsystem technologies over the 5 year span of the VSST program, and serve as a guideline for developing IVHAS technologies under the Aviation Safety Program within the next 5 to 15 years. A long-term vision of IVHAS is provided to describe a basic roadmap for more intelligent and autonomous vehicle systems.
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation per second of computing power, a two order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Younes, Mohammed Y
2016-09-01
Solid waste prediction is crucial for sustainable solid waste management. The collection of accurate waste data records is challenging in developing countries. Solid waste generation is usually correlated with economic, demographic and social factors. However, these factors are not constant due to population and economic growth. The objective of this research is to minimize the land requirements for solid waste disposal in implementing the Malaysian vision of waste disposal options. This goal is achieved by integrating a solid waste forecasting model, waste composition data and the Malaysian vision. The modified adaptive neural fuzzy inference system (MANFIS) was employed to develop a solid waste prediction model and search for the optimum input factors. The performance of the model was evaluated using the root mean square error (RMSE) and the coefficient of determination (R²). The model validation results are as follows: RMSE for training = 0.2678, RMSE for testing = 3.9860 and R² = 0.99. Implementation of the Malaysian vision for waste disposal options can reduce the land requirements for waste disposal by up to 43%. Copyright © 2015 Elsevier Ltd. All rights reserved.
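For reference, the two fit statistics the abstract quotes, RMSE and R², are computed as follows; the numbers below are toy data, not the study's waste records.

```python
# RMSE and coefficient of determination for a prediction against observations.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1 - ss_res / ss_tot)

y = np.array([120.0, 135, 150, 170, 195])          # e.g., tonnes of waste per year
yhat = np.array([118.0, 139, 149, 173, 192])
print(rmse(y, yhat), r_squared(y, yhat))
```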
Vector disparity sensor with vergence control for active vision systems.
Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo
2012-01-01
This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Controlling the vergence angle of a binocular system allows us to explore dynamic environments efficiently, but it requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy in the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
Vector Disparity Sensor with Vergence Control for Active Vision Systems
Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo
2012-01-01
This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Controlling the vergence angle of a binocular system allows us to explore dynamic environments efficiently, but it requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy in the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system. PMID:22438737
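As an aside, the gradient-based (luminance) engine can be pictured as a Lucas-Kanade-style least-squares fit in a local window; the sketch below is a software illustration under that assumption, not the paper's FPGA design (the function name and window size are invented):

import numpy as np

def vector_disparity(left, right, win=7):
    """Estimate a 2-D disparity vector per pixel between binocular views,
    via windowed least squares on image gradients."""
    left, right = left.astype(float), right.astype(float)
    Iy, Ix = np.gradient(left)            # spatial gradients of the left view
    It = right - left                     # inter-ocular intensity difference
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            gx = Ix[y-r:y+r+1, x-r:x+r+1].ravel()
            gy = Iy[y-r:y+r+1, x-r:x+r+1].ravel()
            gt = It[y-r:y+r+1, x-r:x+r+1].ravel()
            A = np.stack([gx, gy], axis=1)
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e4: # skip ill-conditioned (flat) windows
                disp[y, x] = -np.linalg.solve(ATA, A.T @ gt)
    return disp

A real-time multiscale version would wrap this in a coarse-to-fine pyramid, which is what makes the 32 fps FPGA engines reported above practical.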
Intelligent Sensors: Strategies for an Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as an integrated systems approach, i.e., one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators, or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
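To make the "sensor as a complete system" idea concrete, here is a minimal hypothetical sketch (the class name, bounds, and stuck-reading heuristic are all invented, not the Stennis framework) of a sensor that tags every reading with a self-assessed health status:

from dataclasses import dataclass, field

@dataclass
class IntelligentSensor:
    name: str
    lo: float                        # plausible physical lower bound
    hi: float                        # plausible physical upper bound
    history: list = field(default_factory=list)

    def read(self, raw_value: float) -> dict:
        """Return the reading together with a self-assessment tag."""
        self.history.append(raw_value)
        in_range = self.lo <= raw_value <= self.hi
        # Trivial self-assessment: a sensor repeating one value may be stuck.
        stuck = len(self.history) >= 5 and len(set(self.history[-5:])) == 1
        return {"sensor": self.name, "value": raw_value,
                "health": "ok" if in_range and not stuck else "suspect"}

p = IntelligentSensor("chamber_pressure", lo=0.0, hi=500.0)
print(p.read(212.4))   # {'sensor': 'chamber_pressure', 'value': 212.4, 'health': 'ok'}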
The role of spatial integration in the perception of surface orientation with active touch.
Giachritsis, Christos D; Wing, Alan M; Lovell, Paul G
2009-10-01
Vision research has shown that perception of line orientation in the foveal region improves with line length (Westheimer & Ley, 1997). This suggests that the visual system may use spatial integration to improve perception of orientation. In the present experiments, we investigated the role of spatial integration in the perception of surface orientation using kinesthetic and proprioceptive information from the shoulder and elbow. With their left index fingers, participants actively explored virtual slanted surfaces of different lengths and orientations, and were asked to reproduce an orientation or discriminate between two orientations. Results showed that reproduction errors and discrimination thresholds decreased with surface length. This suggests that the proprioceptive shoulder-elbow system may integrate redundant spatial information resulting from extended arm movements to improve orientation judgments.
Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke
NASA Astrophysics Data System (ADS)
Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro
Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
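The 3-D hand-tracking step can be illustrated with standard linear (DLT) triangulation from two calibrated cameras; this is a generic sketch under that assumption, not the project's actual algorithm:

import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices of the two cameras;
    x1, x2: matched (u, v) pixel coordinates of the hand in each view."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # least-squares homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenize to (x, y, z)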
In-process fault detection for textile fabric production: onloom imaging
NASA Astrophysics Data System (ADS)
Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til
2011-05-01
Constant and traceable high fabric quality is of great importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personnel, whose detection rate and maximum period of concentration are limited. Low-resolution automated fabric inspection machines using texture analysis were developed, and since 2003, systems for in-process inspection on weaving machines ("onloom") have been commercially available. With these, defects can be detected but not measured with quantitative precision. Most systems are also prone to inevitable machine vibrations, and feedback loops for fault prevention are not established. Technology has evolved since 2003: camera and computer prices dropped, resolutions were enhanced, and recording speeds increased. These are the preconditions for real-time processing of high-resolution images, yet so far these new technological achievements are not used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process, and new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with a quality database for defect documentation. The system is meant to detect, measure, and classify at least 80% of economically relevant defects. Concepts for feedback loops into the weaving process will also be pointed out.
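As a rough illustration of texture-analysis defect detection (not the project's algorithms; the window size and threshold are invented), one can flag regions whose local texture energy deviates strongly from the fabric's global statistics:

import numpy as np
from scipy.ndimage import uniform_filter

def detect_defects(img, win=15, k=4.0):
    """Return a boolean mask of candidate defect pixels."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img ** 2, win)
    energy = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))  # local std (texture energy)
    mu, sigma = energy.mean(), energy.std()
    return np.abs(energy - mu) > k * sigma   # outliers w.r.t. the weave texture

A production onloom system would additionally need vibration compensation and defect measurement and classification, as the abstract notes.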
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-17
RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS).
ERIC Educational Resources Information Center
Stürmer, Kathleen; Könings, Karen D.; Seidel, Tina
2015-01-01
Preservice teachers' professional vision is an important indicator of their initial acquisition of integrated knowledge structures within university-based teacher education. To date, empirical research investigating which factors contribute to explaining preservice teachers' professional vision is scarce. This study aims to determine which factors…
Group Emotions: The Social and Cognitive Functions of Emotions in Argumentation
ERIC Educational Resources Information Center
Polo, Claire; Lund, Kristine; Plantin, Christian; Niccolai, Gerald P.
2016-01-01
The learning sciences of today recognize the tri-dimensional nature of learning as involving cognitive, social and emotional phenomena. However, many computer-supported argumentation systems still fail in addressing the socio-emotional aspects of group reasoning, perhaps due to a lack of an integrated theoretical vision of how these three…
VTLS Inc.: The Company, the Products, the Services, the Vision.
ERIC Educational Resources Information Center
Chachra, Vinod; And Others
1993-01-01
Describes the range of products and services offered by VTLS, a company that offers comprehensive, integrated library automation software and customer support. VTLS's growth and development in the United States and abroad is described, and nine sidebar articles detail system features and applications in public, academic, and virtual libraries. (20…
ERIC Educational Resources Information Center
Campbell, Suzanne Hetzel; Crabtree, Robbin D.; Kelly, Patrick
2013-01-01
The powerful and complex mandates arising from reports such as "The Future of Nursing: Leading Change, Advancing Health" and "Health Professionals for a New Century: Transforming Education to Strengthen Health Systems in an Interdependent World" challenge colleges and universities to reconsider how they deliver nursing…
DOT National Transportation Integrated Search
1998-01-01
Improving safety is an essential element of AZTech's mission. By extending the use of advanced communications technology and integrating individual traffic management systems, AZTech facilitates safety on the roadways. To improve the management of ...
Managing interoperability and complexity in health systems.
Bouamrane, M-M; Tao, C; Sarkar, I N
2015-01-01
In recent years, we have witnessed substantial progress in the use of clinical informatics systems to support clinicians during episodes of care, manage specialised domain knowledge, perform complex clinical data analysis, and improve the management of health organisations' resources. However, the vision of fully integrated health information ecosystems, which provide relevant information and useful knowledge at the point of care, remains elusive. This journal Focus Theme reviews some of the enduring challenges of interoperability and complexity in clinical informatics systems. Furthermore, a range of approaches is proposed to address, harness, and resolve some of the many remaining issues towards greater integration of health information systems and the extraction of useful or new knowledge from heterogeneous electronic data repositories.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
RTCA Special Committee 213 / EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 213 / EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS).
Competing Visions for Consumer Engagement in the Dawn of the Trump Administration.
Hwang, Ann; Garrett, Danielle; Miller, Michael
Two different models of consumer engagement use largely the same language but represent two distinct paradigms: the first focuses on patients as partners in health care decision making, and the second focuses on financial incentives/penalties for patients. While the two paradigms coexist to some degree, they have different implications, particularly for populations with complex health and social needs. For these populations, financial barriers can undermine the ability to recognize and promote patients as partners in a system of integrated, coordinated care. We describe these two competing visions and their adoption to date, and offer our assessment of future directions for consumer engagement.
The operating room of the future: observations and commentary.
Satava, Richard M
2003-09-01
The Operating Room of the Future is a construct upon which to develop the next generation of operating environments for the patient, surgeon, and operating team. Analysis of the suite of visions for the Operating Room of the Future reveals a broad set of goals, with a clear overall solution to create a safe environment for high-quality healthcare. The vision, although planned for the future, is based upon iteratively improving and integrating current systems, both technology and process. This must become the Operating Room of Today, which will require the enormous efforts described. An alternative future of the operating room, based upon emergence of disruptive technologies, is also presented.
Listening to Another Sense: Somatosensory Integration in the Auditory System
Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.
2014-01-01
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698
2005-03-01
A new "compositional" method for protocol design and implementation is presented, in which small microprotocols are combined to obtain a protocol customized to the needs of a specific setting, under control of an automated theorem proving system that can guarantee ... and Network Centric Enterprise (NCES) visions. This final report documents a wide range of contributions and technology transitions.
1988-04-30
Keywords: haptic hand, touch, vision, robot, object recognition, categorization. ... established that the haptic system has remarkable capabilities for object recognition. We define haptics as purposive touch. The basic tactual system ... gathered ratings of the importance of dimensions for categorizing common objects by touch. Texture and hardness ratings strongly co-vary, which is ...
Oral Health Care Delivery Within the Accountable Care Organization.
Blue, Christine; Riggs, Sheila
2016-06-01
The accountable care organization (ACO) provides an opportunity to strategically design a comprehensive health system in which oral health works within primary care. A dental hygienist/therapist within the ACO represents value-based health care in action. Inspired by health care reform efforts in Minnesota, a vision of an accountable care organization that integrates oral health into primary health care was developed. Dental hygienists and dental therapists can help accelerate the integration of oral health into primary care, particularly in light of the compelling evidence confirming the cost-effectiveness of care delivered by an allied workforce. A dental insurance Chief Operating Officer and a dental hygiene educator used their unique perspectives and experience to describe the potential of an interdisciplinary team-based approach to individual and population health, including oral health, via an accountable care community. The principles of the patient-centered medical home and the vision for accountable care communities present a paradigm shift from a curative system of care to a prevention-based system that encompasses the behavioral, social, nutritional, economic, and environmental factors that impact health and well-being. Oral health measures embedded in the spectrum of general health care have the potential to ensure a truly comprehensive healthcare system.
NASA Astrophysics Data System (ADS)
Näsilä, Antti; Holmlund, Christer; Mannila, Rami; Näkki, Ismo; Ojanen, Harri J.; Akujärvi, Altti; Saari, Heikki; Fussen, Didier; Pieroux, Didier; Demoulin, Philippe
2016-10-01
PICASSO - A PICo-satellite for Atmospheric and Space Science Observations is an ESA project led by the Belgian Institute for Space Aeronomy, in collaboration with VTT Technical Research Centre of Finland Ltd, Clyde Space Ltd. (UK) and Centre Spatial de Liège (BE). The test campaign for the engineering model of the PICASSO VISION instrument, a miniaturized nanosatellite spectral imager, has been successfully completed, and the test results look very promising. The proto-flight model of VISION has also been successfully integrated and is awaiting final integration with the satellite platform.
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, the approach based on modeling biological vision mechanisms is being extensively developed in computer vision. However, up to now, real-world image processing has no effective solution within either the biologically inspired or the conventional approach. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on a local space-variant filter, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
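A minimal sketch of the space-variant filtering idea, assuming a simple ring-wise Gaussian blur that grows with eccentricity (an illustration of foveation, not the MARR model's actual filters):

import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(img, n_rings=5, max_sigma=8.0):
    """Blur increases with distance from the image center (the 'fovea')."""
    img = img.astype(float)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(yy - h / 2.0, xx - w / 2.0)
    ecc /= ecc.max()                              # eccentricity in [0, 1]
    out = np.zeros_like(img)
    edges = np.linspace(0.0, 1.0, n_rings + 1)
    for i in range(n_rings):
        # sigma = 0 at the fovea keeps the innermost ring sharp
        blurred = gaussian_filter(img, sigma=max_sigma * edges[i])
        upper = edges[i + 1] if i < n_rings - 1 else 1.0 + 1e-9
        mask = (ecc >= edges[i]) & (ecc < upper)
        out[mask] = blurred[mask]
    return out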
Precision of computer-assisted core decompression drilling of the knee.
Beckmann, J; Goetz, J; Bäthis, H; Kalteis, T; Grifka, J; Perlick, L
2006-06-01
Core decompression by exact drilling into the ischemic areas is the treatment of choice in early stages of osteonecrosis of the femoral condyle. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time for both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany); ten sawbones were drilled under fluoroscopic control only. A statistically significant difference was found in the distance to the desired mid-point of the lesion, with a mean of 0.58 mm in the navigated group and 0.98 mm in the control group. Significant differences were also found in the number of drilling corrections and the radiation time needed. The fluoroscopy-based VectorVision navigation system thus shows high feasibility and precision for computer-guided drilling with a simultaneous reduction in radiation time, and could therefore be integrated into clinical routine.
Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin
2017-03-27
Augmented reality (AR), which uses computer-aided projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges regarding the optical system in an AR system: registration, vision correction, and readability under strong ambient light. Here, we solve these three challenges simultaneously for the first time using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated into an optical see-through AR system. One of the LC lenses is used to electrically adjust the position of the projected virtual image, the so-called registration. The other LC lens, with a larger aperture and a polarization-independent characteristic, is in charge of vision correction, such as myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is addressed by the electrically switchable transmittance of the LC attenuator, which originates from light scattering and light absorption. The concept demonstrated in this paper could be further extended to other electro-optical devices as long as the devices exhibit the capability of phase and amplitude modulation.
Dynamic modulation of visual and electrosensory gains for locomotor control
Sutton, Erin E.; Demir, Alican; Stamper, Sarah A.; Fortune, Eric S.; Cowan, Noah J.
2016-01-01
Animal nervous systems resolve sensory conflict for the control of movement. For example, the glass knifefish, Eigenmannia virescens, relies on visual and electrosensory feedback as it swims to maintain position within a moving refuge. To study how signals from these two parallel sensory streams are used in refuge tracking, we constructed a novel augmented reality apparatus that enables the independent manipulation of visual and electrosensory cues to freely swimming fish (n = 5). We evaluated the linearity of multisensory integration, the change to the relative perceptual weights given to vision and electrosense in relation to sensory salience, and the effect of the magnitude of sensory conflict on sensorimotor gain. First, we found that tracking behaviour obeys superposition of the sensory inputs, suggesting linear sensorimotor integration. In addition, fish rely more on vision when electrosensory salience is reduced, suggesting that fish dynamically alter sensorimotor gains in a manner consistent with Bayesian integration. However, the magnitude of sensory conflict did not significantly affect sensorimotor gain. These studies lay the theoretical and experimental groundwork for future work investigating multisensory control of locomotion. PMID:27170650
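The reliability-based reweighting described above is commonly modeled as inverse-variance (Bayesian) cue combination; the sketch below is illustrative, not the paper's model:

def fuse(x_vision, var_vision, x_electro, var_electro):
    """Combine two position estimates, weighting each by its reliability
    (inverse variance); noisier cues receive lower weight."""
    w_v = (1.0 / var_vision) / (1.0 / var_vision + 1.0 / var_electro)
    estimate = w_v * x_vision + (1.0 - w_v) * x_electro
    fused_var = 1.0 / (1.0 / var_vision + 1.0 / var_electro)
    return estimate, fused_var

# Degrading electrosensory salience (larger variance) shifts weight to vision:
print(fuse(0.0, 0.1, 1.0, 0.9))   # vision weight = 0.9 -> estimate 0.1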
Multisensory integration of colors and scents: insights from bees and flowers.
Leonard, Anne S; Masek, Pavel
2014-06-01
Karl von Frisch's studies of bees' color vision and chemical senses opened a window into the perceptual world of a species other than our own. A century of subsequent research on bees' visual and olfactory systems has developed along two productive but independent trajectories, leaving the questions of how and why bees use these two senses in concert largely unexplored. Given current interest in multimodal communication and recently discovered interplay between olfaction and vision in humans and Drosophila, understanding multisensory integration in bees is an opportunity to advance knowledge across fields. Using a classic ethological framework, we formulate proximate and ultimate perspectives on bees' use of multisensory stimuli. We discuss interactions between scent and color in the context of bee cognition and perception, focusing on mechanistic and functional approaches, and we highlight opportunities to further explore the development and evolution of multisensory integration. We argue that although the visual and olfactory worlds of bees are perhaps the best-studied of any non-human species, research focusing on the interactions between these two sensory modalities is vitally needed.
Relating binocular and monocular vision in strabismic and anisometropic amblyopia.
Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D
2006-06-01
To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.
An integrated dexterous robotic testbed for space applications
NASA Technical Reports Server (NTRS)
Li, Larry C.; Nguyen, Hai; Sauer, Edward
1992-01-01
An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.
Model-based object classification using unification grammars and abstract representations
NASA Astrophysics Data System (ADS)
Liburdy, Kathleen A.; Schalkoff, Robert J.
1993-04-01
The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as 'graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.
Beyond the computer-based patient record: re-engineering with a vision.
Genn, B; Geukers, L
1995-01-01
In order to achieve real benefit from the potential offered by a Computer-Based Patient Record, the capabilities of the technology must be applied along with true re-engineering of healthcare delivery processes. University Hospital recognizes this and is using systems implementation projects as the catalyst for transforming the way we care for our patients. Integration is fundamental to the success of these initiatives, and it must be explicitly planned against an organized systems architecture whose standards are market-driven. University Hospital also recognizes that Community Health Information Networks will offer improved quality of patient care at a reduced overall cost to the system. All of these implementation factors are considered up front as the hospital makes its initial decisions on how to computerize its patient records. This improves our chances for success and will provide a consistent vision to guide the hospital's development of new and better patient care.
NASA Technical Reports Server (NTRS)
Brooks, Rodney Allen; Stein, Lynn Andrea
1994-01-01
We describe a project to capitalize on newly available levels of computational resources in order to understand human cognition. We will build an integrated physical system including vision, sound input and output, and dextrous manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system will learn to 'think' by building on its bodily experiences to accomplish progressively more abstract tasks. Past experience suggests that in attempting to build such an integrated system we will have to fundamentally change the way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of intelligence. We expect to be able to better reconcile the theories that will be developed with current work in neuroscience.
Autonomous docking system for space structures and satellites
NASA Astrophysics Data System (ADS)
Prasad, Guru; Tajudeen, Eddie; Spenser, James
2005-05-01
Aximetric proposes a Distributed Command and Control (C2) architecture for autonomous on-orbit assembly in space with a unique vision- and sensor-driven docking mechanism. Aximetric is currently working on IP-based distributed control strategies, a docking/mating plate, alignment and latching mechanisms, umbilical structure/cord designs, and hardware/software in a closed-loop architecture for a smart autonomous demonstration utilizing proven developments in sensor and docking technology. These technologies can be effectively applied to many transferring/conveying and on-orbit servicing applications, including the capturing and coupling of space-bound vehicles and components. The autonomous system will be a "smart" system that incorporates a vision system used for identifying, tracking, locating, and mating the transferring device to the receiving device. A robustly designed coupler for the transfer of fuel will be integrated, and advanced sealing technology will be utilized for isolation and purging of cavities resulting from the mating process and/or from the incorporation of other electrical and data acquisition devices used as part of the overall smart system.
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-01-01
Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between the human operator and machine vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair, and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Astrophysics Data System (ADS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-02-01
Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between the human operator and machine vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair, and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
Schwartz, Jeremy I; Dunkle, Ashley; Akiteng, Ann R; Birabwa-Male, Doreen; Kagimu, Richard; Mondo, Charles K; Mutungi, Gerald; Rabin, Tracy L; Skonieczny, Michael; Sykes, Jamila; Mayanja-Kizza, Harriet
2015-01-01
The burden of non-communicable diseases (NCDs) in low- and middle-income countries (LMICs) is accelerating. Given that the capacity of health systems in LMICs is already strained by the weight of communicable diseases, these countries find themselves facing a double burden of disease. NCDs contribute significantly to morbidity and mortality, thereby playing a major role in the cycle of poverty, and impeding development. Integrated approaches to health service delivery and healthcare worker (HCW) training will be necessary in order to successfully combat the great challenge posed by NCDs. In 2013, we formed the Uganda Initiative for Integrated Management of NCDs (UINCD), a multidisciplinary research collaboration that aims to present a systems approach to integrated management of chronic disease prevention, care, and the training of HCWs. Through broad-based stakeholder engagement, catalytic partnerships, and a collective vision, UINCD is working to reframe integrated health service delivery in Uganda.
NASA Astrophysics Data System (ADS)
Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella
In this paper, we propose a novel approach of using interactive virtual environment technology in Vision Restoration Therapy for vision loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before even modest improvements can be seen in patients. A highly immersive and interactive virtual environment allows the patient to practice everyday activities such as object identification and object manipulation through the use of 3-D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye, and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.
An augmented-reality edge enhancement application for Google Glass.
Hwang, Alex D; Peli, Eli
2014-08-01
Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearer. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected by image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancement was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancement with a diffuser film. The performance boost is limited by the Glass camera's performance; the authors assume that this accounts for why improvements were observed only in the diffuser-film condition (simulating low vision). With the benefit of see-through augmented-reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
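The overlay can be pictured as a generic edge-enhancement pass; this sketch is illustrative (thresholds and blending weights invented), not the authors' Glass implementation, which also performs distortion correction and parallax compensation:

import cv2

def enhance_edges(frame, low=50, high=150):
    """Overlay bright edge contours on a BGR camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)        # binary edge map
    overlay = frame.copy()
    overlay[edges > 0] = (255, 255, 255)      # paint edges white
    return cv2.addWeighted(frame, 0.5, overlay, 0.5, 0)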
NASA Astrophysics Data System (ADS)
Mahajan, Ajay; Chitikeshi, Sanjeevi; Utterbach, Lucas; Bandhil, Pavan; Figueroa, Fernando
2006-05-01
This paper describes the application of intelligent sensors in Integrated Systems Health Monitoring (ISHM) as applied to a rocket test stand. The development of intelligent sensors is attempted as an integrated systems approach, i.e., one treats the sensors as a complete system with its own physical transducer, A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements associated with the rocket test stands. These smart elements can be sensors, actuators, or other devices. Though the immediate application is the monitoring of the rocket test stands, the technology should be generally applicable to the ISHM vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
Advanced integrated life support system update
NASA Technical Reports Server (NTRS)
Whitley, Phillip E.
1994-01-01
The Advanced Integrated Life Support System Program (AILSS) is an advanced development effort to integrate life support and protection requirements using the U.S. Navy's fighter/attack mission as a starting point. The goal of AILSS is to optimally mate protection from altitude, acceleration, chemical/biological agent, and thermal environment (hot, cold, and cold water immersion) stress with mission enhancement through improved restraint, night vision, and head-mounted reticules and displays to ensure mission capability. The primary emphasis to date has been to establish garment design requirements and tradeoffs for protection. Here the garment and the human interface are treated as a system. Twelve state-of-the-art concepts from government and industry were evaluated for design versus performance. On the basis of a combination of centrifuge data, thermal manikin data, thermal modeling, and mobility studies, some key design parameters have been determined. Future efforts will concentrate on the integration of protection through garment design and the use of a single-layer, multiple-function concept to streamline the garment system.
Software as a service approach to sensor simulation software deployment
NASA Astrophysics Data System (ADS)
Webster, Steven; Miller, Gordon; Mayott, Gregory
2012-05-01
Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and benefit the domain community through immediate deployment of lessons learned.
Vision Sensor-Based Road Detection for Field Robot Navigation
Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen
2015-01-01
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514
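The GrowCut stage can be read as a cellular automaton in which labeled cells repeatedly "attack" their neighbors, with attack strength attenuated by appearance similarity. The sketch below illustrates that framework on a grayscale image and is not the paper's implementation (note that np.roll wraps at image borders, which a real implementation would mask out):

import numpy as np

def growcut(img, labels, strength, n_iter=50):
    """img: HxW intensities; labels: HxW ints (0 = unlabeled);
    strength: HxW floats in [0, 1], 1.0 at the seed pixels."""
    img = img.astype(float)
    g_max = np.abs(img).max() + 1e-9
    for _ in range(n_iter):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb_lab = np.roll(labels, (dy, dx), axis=(0, 1))
            nb_str = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(img, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(img - nb_img) / g_max   # similarity in [0, 1]
            attack = g * nb_str
            win = attack > strength                  # neighbor conquers cell
            labels = np.where(win, nb_lab, labels)
            strength = np.where(win, attack, strength)
    return labels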
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps
The paradox of pharmacy: A profession's house divided.
Brown, Daniel
2012-01-01
To describe the paradox in pharmacy between the vision of patient care and the reality of community pharmacy practice and to explore how integrated reimbursement for the retail prescription and linking cognitive patient care services directly to prescription processing could benefit the profession. A dichotomy exists between what many pharmacists do and what they've been trained to do. Pharmacy leaders have formulated a vision for pharmacists to become more involved in direct patient care. All graduates now receive PharmD-level training, and some leaders call for requirements of postgraduate residency training and board certification for pharmacists who provide patient care. How such requirements would relate to community pharmacy practice is unclear. The retail prescription remains the primary link between the pharmacist and the health care consumer. Cognitive services, such as medication therapy management (MTM), need to be integrated into the standard workflow of community pharmacies so as to become a natural extension of the professional services rendered in the process of filling a prescription. Current prescription fees are not sufficient to support legitimate professional services. A proposed integrated pricing system for retail prescriptions includes a $15 professional fee that is scaled upward for value-added services, such as MTM. Pharmacy includes a diversity of practice that has historically been a source of division. For pharmacists to reach their potential as patient care providers, the various factions within the profession must forge a unified vision of the future that addresses all realms of practice.
ERIC Educational Resources Information Center
Selig, Judith A.; And Others
This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December 1966 to August 1967, describes the methodology used to load a large body of information--a programmed text on basic ophthalmology--onto a computer for subsequent information retrieval and computer-assisted…
A New Vision for Integrated Breast Care.
1999-09-01
CTE Policy Past, Present, and Future: Driving Forces behind the Evolution of Federal Priorities
ERIC Educational Resources Information Center
Imperatore, Catherine; Hyslop, Alisha
2017-01-01
Federal legislation has driven and been receptive to the vision of a rigorous, relevant career and technical education (CTE) system integrated with academics and aligned across middle school, secondary school, and postsecondary education. This article uses a social policy analysis approach to trace the history of federal CTE policy throughout the…
DOT National Transportation Integrated Search
1998-01-01
Advanced communications technology is the engine that continually moves AZTech closer to its goal of integrating transportation systems throughout the region. At the heart of this technology is a state-of-the-art Closed Circuit Television (CCTV) syst...
Canadian Wildland Fire Strategy Project Management Team
2006-01-01
The Canadian Wildland Fire Strategy (CWFS) provides a vision for a new, innovative, and integrated approach to wildland fire management in Canada. It was developed under the auspices of the Canadian Council of Forest Ministers and seeks to balance the social, ecological, and economic aspects of wildland fire through a risk management framework that emphasizes hazard...
Human Factors Engineering as a System in the Vision for Exploration
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Smith, Danielle; Holden, Kritina
2006-01-01
In order to accomplish NASA's Vision for Exploration while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates, and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps). For example, medical operations scenarios have been generated for lunar habitation in order to identify HSI requirements for the lunar communications architecture. Throughout these ConOps exercises, HFE personnel are testing various tools and methodologies that have been identified in the literature. A key part of the effort is the identification of optimal processes, methods, and tools for these early development-phase activities, such as ConOps, requirements development, and early conceptual design. An overview of the activities completed thus far, as well as the tools and methods investigated, will be presented.
Bringing Vision to Practice: Planning and Provisioning the New Library Resource Center
ERIC Educational Resources Information Center
Wilson, Lisa
2004-01-01
The most critical factor in creating a successful school library is the development of a clear vision of the mission and functionality of this integral learning space. However, the process of bringing a vision to realization involves harsh realities and sensible planning. The budget will determine many purchasing decisions and therefore it is…
Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.
2016-01-01
Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized, and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth-perception study, five ophthalmic surgeons completed a pre-set dexterity task with a 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in the displayed stereoscopic OCT volumes. PMID:27231616
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
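The EM-like update at the core of this method can be made concrete with a short sketch. The kernel form, bandwidth, and toy data below are our own illustrative assumptions, not the authors' implementation; the sketch only shows the alternation the abstract describes between soft association to detections (E-step) and likelihood-weighted relocation (M-step).

```python
import numpy as np

def ml_mean_shift(x0, detections, similarities, bandwidth=30.0, iters=20, tol=1e-3):
    """EM-like mean-shift: alternate between soft association to detections
    (E-step) and a likelihood-weighted relocation of the track (M-step).

    x0           : (2,) initial track position
    detections   : (N, 2) candidate detections from the multi-model detectors
    similarities : (N,) appearance similarity of each detection to the track
    """
    x = np.asarray(x0, dtype=float)
    d = np.asarray(detections, dtype=float)
    s = np.asarray(similarities, dtype=float)
    for _ in range(iters):
        # E-step: association weight = appearance similarity * spatial kernel
        dist2 = np.sum((d - x) ** 2, axis=1)
        w = s * np.exp(-dist2 / (2.0 * bandwidth ** 2))
        if w.sum() < 1e-12:
            break                      # no detection supports this track
        w /= w.sum()
        # M-step: relocate to the weighted mean (ML position estimate)
        x_new = w @ d
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy usage: two detections; the more similar one dominates the converged position
print(ml_mean_shift([0, 0], [[10, 0], [40, 5]], [0.9, 0.3]))
```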
IoT Contextual Factors on Healthcare.
Michalakis, Konstantinos; Caridakis, George
2017-01-01
With the emergence of the Internet of Things, new services in healthcare will be available and existing systems will be integrated in the IoT framework, providing automated medical supervision and efficient medical treatment. Context awareness plays a critical role in realizing the vision of the IoT, providing rich contextual information that can help the system act more efficiently. Since context in healthcare has its unique characteristics, it is necessary to define an appropriate context aware framework for healthcare IoT applications. We identify this context as perceived in healthcare applications and describe the context aware procedures. We also present an architecture that connects the sensors that measure biometric data with the sensory networks of the environment and the various IoT middleware that reside in the geographical area. Finally, we discuss the challenges for the realization of this vision.
Civil use of night vision goggles within the National Airspace System
NASA Astrophysics Data System (ADS)
Winkel, James G.; Faber, Lorelei
2001-08-01
When properly employed, Night Vision Goggles (NVGs) improve a pilot's ability to see during periods of darkness. The resultant enhancement in situational awareness achieved when using NVGs increases flight safety during night VFR operations. The FAA is constrained by a lack of the requisite regulatory and guidance infrastructure to adequately facilitate civil requests for NVG use within the National Airspace System (NAS). To address this, RTCA Special Committee 196 (SC-196), Night Vision Goggles and Associated Appliances and Equipment, is formed and tasked to develop: an operational concept and operational requirements for NVG implementation into the NAS, minimum operational performance standards for NVGs, and training guidelines and considerations for NVG operations. This paper provides a historical perspective on use of NVGs within the NAS, the status of SC-196 work in progress, FAA integration of SC-196 committee products and the harmonization effort between EUROCAE's NVG committee and SC-196.
Berge, Jerica M; Adamek, Margaret; Caspi, Caitlin; Loth, Katie A; Shanafelt, Amy; Stovitz, Steven D; Trofholz, Amanda; Grannon, Katherine Y; Nanney, Marilyn S
2017-08-01
Despite intense nationwide efforts to improve healthy eating and physical activity across the lifespan, progress has plateaued. Moreover, health inequities remain. Frameworks that integrate research, clinical practice, policy, and community resources to address weight-related behaviors are needed. Implementation and evaluation of integration efforts also remain a challenge. The purpose of this paper is to: (1) Describe the planning and development process of an integrator entity, HEAL (Healthy Eating and Activity across the Lifespan); (2) present outcomes of the HEAL development process including the HEAL vision, mission, and values statements; (3) define the planned integrator functions of HEAL; and (4) describe the ongoing evaluation of the integration process. HEAL team members used a theoretically-driven, evidence-based, systemic, twelve-month planning process to guide the development of HEAL and to lay the foundation for short- and long-term integration initiatives. Key development activities included a review of the literature and case studies, identifying guiding principles and infrastructure needs, conducting stakeholder/key informant interviews, and continuous capacity building among team members. Outcomes/deliverables of the first year of HEAL included a mission, vision, and values statements; definitions of integration and integrator functions and roles; a set of long-range plans; and an integration evaluation plan. Application of the HEAL integration model is currently underway through community solicited initiatives. Overall, HEAL aims to lead real-world integrative work that coalesces research, clinical practice, and policy with community resources to inspire a culture of health equity aimed at improving healthy eating and physical activity across the lifespan. Copyright © 2017 Elsevier Inc. All rights reserved.
Climate change adaptation for the US National Wildlife Refuge System
Griffith, Brad; Scott, J. Michael; Adamcik, Robert S.; Ashe, Daniel; Czech, Brian; Fischman, Robert; Gonzalez, Patrick; Lawler, Joshua J.; McGuire, A. David; Pidgorna, Anna
2009-01-01
Since its establishment in 1903, the National Wildlife Refuge System (NWRS) has grown to 635 units and 37 Wetland Management Districts in the United States and its territories. These units provide the seasonal habitats necessary for migratory waterfowl and other species to complete their annual life cycles. Habitat conversion and fragmentation, invasive species, pollution, and competition for water have stressed refuges for decades, but the interaction of climate change with these stressors presents the most recent, pervasive, and complex conservation challenge to the NWRS. Geographic isolation and small unit size compound the challenges of climate change, but a combined emphasis on species that refuges were established to conserve and on maintaining biological integrity, diversity, and environmental health provides the NWRS with substantial latitude to respond. Individual symptoms of climate change can be addressed at the refuge level, but the strategic response requires system-wide planning. A dynamic vision of the NWRS in a changing climate, an explicit national strategic plan to implement that vision, and an assessment of representation, redundancy, size, and total number of units in relation to conservation targets are the first steps toward adaptation. This adaptation must begin immediately and be built on more closely integrated research and management. Rigorous projections of possible futures are required to facilitate adaptation to change. Furthermore, the effective conservation footprint of the NWRS must be increased through land acquisition, creative partnerships, and educational programs in order for the NWRS to meet its legal mandate to maintain the biological integrity, diversity, and environmental health of the system and the species and ecosystems that it supports.
Pasquali, Sara K.; Jacobs, Jeffrey P.; Farber, Gregory K.; Bertoch, David; Blume, Elizabeth D.; Burns, Kristin M.; Campbell, Robert; Chang, Anthony C.; Chung, Wendy K.; Riehle-Colarusso, Tiffany; Curtis, Lesley H.; Forrest, Christopher B.; Gaynor, William J.; Gaies, Michael G.; Go, Alan S.; Henchey, Paul; Martin, Gerard R.; Pearson, Gail; Pemberton, Victoria L.; Schwartz, Steven M.; Vincent, Robert; Kaltman, Jonathan R.
2016-01-01
The National Heart, Lung, and Blood Institute convened a Working Group in January 2015 to explore issues related to an integrated data network for congenital heart disease (CHD) research. The overall goal was to develop a common vision for how the rapidly increasing volumes of data captured across numerous sources can be managed, integrated, and analyzed to improve care and outcomes. This report summarizes the current landscape of CHD data, data integration methodologies used across other fields, key considerations for data integration models in CHD, and the short- and long-term vision and recommendations made by the Working Group. PMID:27045129
NASA Technical Reports Server (NTRS)
Liu, Dahai; Goodrich, Kenneth H.; Peak, Bob
2010-01-01
This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on the performance of pilots flying a light, single-engine general aviation airplane. We evaluated the effects and interactions of two levels of terrain portrayal, guidance symbology, and flight control response type on pilot performance during the conduct of a relatively complex instrument approach procedure. The terrain and guidance presentations were evaluated as elements of an integrated primary flight display system. The approach procedure used in the study included a steeply descending, curved segment as might be encountered in emerging, required navigation performance (RNP) based procedures. Pilot performance measures consisted of flight technical performance, perceived workload, perceived situational awareness and subjective preference. The results revealed that an elevation based generic terrain portrayal significantly improved perceived situation awareness without adversely affecting flight technical performance or workload. Other factors (pilot instrument rating, control response type, and guidance symbology) were not found to significantly affect the performance measures.
Integration of local motion is normal in amblyopia
NASA Astrophysics Data System (ADS)
Hess, Robert F.; Mansouri, Behzad; Dakin, Steven C.; Allen, Harriet A.
2006-05-01
We investigate the global integration of local motion direction signals in amblyopia, in a task where performance is equated between normal and amblyopic eyes at the single element level. We use an equivalent noise model to derive the parameters of internal noise and number of samples, both of which we show are normal in amblyopia for this task. This result is in apparent conflict with a previous study in amblyopes showing that global motion processing is defective in global coherence tasks [Vision Res. 43, 729 (2003)]. A similar discrepancy between the normalcy of signal integration [Vision Res. 44, 2955 (2004)] and anomalous global coherence form processing has also been reported [Vision Res. 45, 449 (2005)]. We suggest that these discrepancies for form and motion processing in amblyopia point to a selective problem in separating signal from noise in the typical global coherence task.
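The equivalent noise analysis named here has a standard closed form; the version below is the textbook relation with our own symbols, not necessarily the paper's exact parameterization:

\[
\sigma_{\mathrm{obs}}^{2}(\sigma_{\mathrm{ext}}) = \frac{\sigma_{\mathrm{int}}^{2} + \sigma_{\mathrm{ext}}^{2}}{n}
\]

Direction-discrimination thresholds \(\sigma_{\mathrm{obs}}\) are measured at several levels of external directional noise \(\sigma_{\mathrm{ext}}\); fitting this curve then yields the two parameters reported as normal in amblyopia: the equivalent internal noise \(\sigma_{\mathrm{int}}\), which sets the low-noise asymptote, and the effective number of integrated samples \(n\), which sets how thresholds rise with external noise.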
NASA Technical Reports Server (NTRS)
Young, Steven D.; Harrah, Steven D.; deHaag, Maarten Uijt
2002-01-01
Terrain Awareness and Warning Systems (TAWS) and Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data (e.g. terrain, obstacles, and/or features). As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. This lack of a quantifiable integrity level is one of the constraints that has limited certification and operational approval of TAWS/SVS to "advisory-only" systems for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound database integrity by using downward-looking remote sensing technology (i.e. radar altimeters). This paper describes an extension of the integrity monitor concept to include a forward-looking sensor to cover additional classes of terrain database faults and to reduce the exposure time associated with integrity threats. An operational concept is presented that combines established feature extraction techniques with a statistical assessment of similarity measures between the sensed and stored features using principles from classical detection theory. Finally, an implementation is presented that uses existing commercial-off-the-shelf weather radar sensor technology.
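As a rough illustration of the detection-theoretic framing, the sketch below tests whether sensed and stored elevation profiles are statistically consistent; the chi-square statistic, noise model, and alert probability are our own simplified assumptions, not the paper's monitor design.

```python
import numpy as np
from scipy import stats

def integrity_test(sensed, stored, sigma, p_fa=1e-5):
    """Flag a terrain-database integrity fault when sensed and stored
    elevation profiles disagree more than sensor noise can explain.

    sensed, stored : (N,) elevation profiles along track (m)
    sigma          : assumed 1-sigma combined sensor/database noise (m)
    p_fa           : allowed false-alert probability (illustrative value)
    """
    # Under H0 (consistent database) the normalized squared residual
    # is chi-square distributed with N degrees of freedom.
    r = (np.asarray(sensed) - np.asarray(stored)) / sigma
    T = float(np.sum(r ** 2))
    threshold = stats.chi2.ppf(1.0 - p_fa, df=len(r))
    return T, threshold, T > threshold

rng = np.random.default_rng(0)
stored = rng.uniform(100, 300, size=64)
sensed = stored + rng.normal(0, 2.0, size=64)            # healthy case
print(integrity_test(sensed, stored, sigma=2.0))
print(integrity_test(sensed + 15.0, stored, sigma=2.0))  # biased database
```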
Hi-Vision telecine system using pickup tube
NASA Astrophysics Data System (ADS)
Iijima, Goro
1992-08-01
Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.
Design and implementation of a remote UAV-based mobile health monitoring system
NASA Astrophysics Data System (ADS)
Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix
2017-04-01
Unmanned aerial vehicles (UAVs) play increasing roles in structure health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures becomes an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit back the monitoring information in real-time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. The computation-effective vision-based tracking solution based on a single feature is an improvement over existing vision-based lead-follower tracking systems that either have poor tracking performance due to the use of a single feature, or have improved tracking performance at a cost of the usage of multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control is used to self-align headings of directional antennas to enable robust communication in mobility. Compared to existing omni-communication systems, the directional communication solution significantly increases the operation range of remote monitoring systems. In this paper, we develop the integrated modeling framework of camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.
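A minimal sketch of single-feature tracking with proportional heading correction is given below; the color-blob detector, gain, and thresholds are hypothetical stand-ins for the paper's feature detector and antenna/camera alignment controller.

```python
import numpy as np

def track_single_feature(frame_hsv, hue_lo=100, hue_hi=130):
    """Locate one color-blob feature (e.g., a marker on the mobile platform)
    and return its pixel centroid; None if the feature is not visible."""
    h = frame_hsv[..., 0]
    mask = (h >= hue_lo) & (h <= hue_hi)
    if mask.sum() < 20:                      # illustrative minimum blob size
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def heading_command(centroid, frame_width, k_p=0.005):
    """Proportional yaw-rate command that recenters the feature, the same
    self-alignment idea used for the directional antenna / camera heading."""
    err = centroid[0] - frame_width / 2.0    # pixels off boresight
    return -k_p * err                        # rad/s; sign recenters target

# toy frame: 240x320 'HSV' array with a blue-ish blob right of center
frame = np.zeros((240, 320, 3))
frame[100:120, 250:270, 0] = 115
c = track_single_feature(frame)
print(c, heading_command(c, 320))
```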
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another and lens and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
Improving competitiveness through performance-measurement systems.
Stewart, L J; Lockamy, A
2001-12-01
Parallels exist between the competitive pressures felt by U.S. manufacturers over the past 30 years and those experienced by healthcare providers today. Increasing market deregulation, changing government policies, and growing consumerism have altered the healthcare arena. Responding to similar pressures, manufacturers adopted a strategic orientation driven by customer needs and expectations that led them to achieve high performance levels and surpass their competition. The adoption of integrated performance-measurement systems was instrumental in these firms' success. An integrated performance-measurement model for healthcare organizations can help to blend the organization's strategy with the demands of the contemporary healthcare environment. Performance-measurement systems encourage healthcare organizations to focus on their mission and vision by aligning their strategic objectives and resource-allocation decisions with customer requirements.
Lapaige, Véronique
2010-01-01
The development of a dynamic leadership coalition between practitioners and researchers/scientists – which is known in Canada as integrated knowledge translation (KT) – can play a major role in bridging the know-do gap in the health care and public health sectors. In public health, and especially in globally oriented public health, integrated KT is a dynamic, interactive (collaborative), and nonlinear phenomenon that goes beyond a reductionist vision of knowledge translation, to attain inter-, multi-, and even transdisciplinary status. Intimately embedded in its socioenvironmental context and closely connected with the complex interventions of multiple actors, the nonlinear process of integrated KT is based on a double principle: (1) the principle of transcendence of frontiers (sectorial, disciplinary, geographic, cultural, and cognitive), and (2) the principle of integration of knowledge beyond these frontiers. However, even though many authors agree on the overriding importance of integrated KT, there is as yet little understanding of the causal framework of integrated KT. Here, one can ask two general questions. Firstly, what “determines” integrated KT? Secondly, even if one wanted to apply a “transfrontier knowledge translation” vision, how should one go about doing so? For example, what would be the nature and qualities of a representative research program that applied a “transfrontier collaboration” approach? This paper focuses on the determinants of integrated KT within the burgeoning field of knowledge translation research (KT research). The paper is based on the results of a concurrent mixed method design which dealt with the complexity of building and sustaining effective coalitions and partnerships in the health care and public health sectors. The aims of this paper are: (1) to present an “integrated KT” conceptual framework which is global-context-sensitive, and (2) to promote the incorporation of a new “transfrontier knowledge translation” approach/vision designed primarily for globally oriented public health researchers and health scientists. PMID:21197354
Fast and robust generation of feature maps for region-based visual attention.
Aziz, Muhammad Zaheer; Mertsching, Bärbel
2008-05-01
Visual attention is one of the important phenomena in biological vision which can be followed to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention in contrast to late clustering as done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon the extended findings from the color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of obtained regions and then saliency is evaluated using the rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over the existing techniques is the reusability of the salient regions in the high-level machine vision procedures due to preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the main stream of machine vision and systems with restricted computing resources such as mobile robots can benefit from its advantages.
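The rarity criterion can be illustrated with a small sketch over per-region feature values; the histogram-based frequency estimate below is our own simplification, not the authors' algorithm.

```python
import numpy as np

def rarity_saliency(feature_values, bins=8):
    """Rarity criterion: a region is salient when its feature value is rare
    among all segmented regions (illustrative restatement, not the paper's code).

    feature_values : (N,) one feature channel (e.g., size, eccentricity,
                     orientation) computed per pre-clustered region
    returns        : (N,) saliency in [0, 1], higher = rarer
    """
    hist, edges = np.histogram(feature_values, bins=bins)
    idx = np.clip(np.digitize(feature_values, edges[1:-1]), 0, bins - 1)
    freq = hist[idx] / len(feature_values)   # fraction of regions in same bin
    sal = 1.0 - freq
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

sizes = np.array([100, 105, 98, 102, 400])   # one unusually large region
print(rarity_saliency(sizes))                # the rare region scores highest
```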
Basic design principles of colorimetric vision systems
NASA Astrophysics Data System (ADS)
Mumzhiu, Alex M.
1998-10-01
Color measurement is an important part of overall production quality control in textile, coating, plastics, food, paper and other industries. The color measurement instruments such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages are available. However the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers have very limited capabilities. A lack of understanding that a vision based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper so only the most important aspects will be discussed. An overview of the major areas of applications for colorimetric vision systems will be discussed. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
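One of the basic colorimetric principles at issue is that device RGB must be transformed to a standard, perceptually meaningful space before color differences are thresholded. The sketch below shows the standard linear-sRGB-to-CIELAB route and a CIE76 color difference; it assumes a camera characterized so that its linearized output approximates sRGB, which is the simplest possible calibration assumption.

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65): the standard colorimetric matrix.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])

def rgb_to_lab(rgb, white=(0.9505, 1.0000, 1.0890)):
    """Convert linear RGB in [0, 1] to CIELAB so that color differences can
    be judged with a perceptual metric instead of raw camera counts."""
    xyz = M_RGB2XYZ @ np.asarray(rgb, dtype=float)
    t = xyz / np.asarray(white)
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
    return 116*f[1] - 16, 500*(f[0] - f[1]), 200*(f[1] - f[2])

def delta_e(lab1, lab2):
    """CIE76 color difference: the quantity a colorimetric vision system
    should threshold, rather than differences in device RGB."""
    return float(np.linalg.norm(np.subtract(lab1, lab2)))

print(delta_e(rgb_to_lab([0.5, 0.2, 0.2]), rgb_to_lab([0.52, 0.2, 0.2])))
```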
Guiding the mind's eye: improving communication and vision by external control of the scanpath
NASA Astrophysics Data System (ADS)
Barth, Erhardt; Dorr, Michael; Böhme, Martin; Gegenfurtner, Karl; Martinetz, Thomas
2006-02-01
Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements. Inspired by his view, we have studied the implications of the scanpath for visual communication and came up with the idea to not only sense and analyse eye movements, but also guide them by using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability for one of these candidates to be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed very briefly such that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation minimising the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and augmented vision and learning (guide a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and the close link between vision research and engineering.
Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †
Gu, Yanlei; Hsu, Li-Ta; Kamijo, Shunsuke
2015-01-01
This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, which include Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for the localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration. The employed GNSS technique reduces the multipath and NLOS effects by using the 3D building map. In addition, the inertial sensor can describe the vehicle motion, but has a drift problem as time increases. This paper develops vision-based lane detection, which is firstly used for controlling the drift of the inertial sensor. Moreover, the lane keeping and changing behaviors are extracted from the lane detection function, and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in the challenging city urban scenario. The experiments demonstrate the proposed method has sub-meter accuracy with respect to mean positioning error. PMID:26633420
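The drift-control idea can be sketched as a small filter that integrates inertial data and corrects with intermittent lane-offset fixes. The 1-D Kalman formulation and all noise values below are our own illustrative choices, not the paper's integration scheme.

```python
import numpy as np

def fuse_lateral(y0, accel_meas, lane_meas, dt=0.1, q=0.05, r=0.2):
    """1-D Kalman filter: integrate lateral acceleration (inertial, drifts)
    and correct with vision lane-offset measurements when available."""
    x = np.array([y0, 0.0])                # [lateral offset m, lateral vel m/s]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    Q = q * np.eye(2)
    H = np.array([[1.0, 0.0]])
    for a, z in zip(accel_meas, lane_meas):
        x = F @ x + B * a                  # predict with inertial input
        P = F @ P @ F.T + Q
        if z is not None:                  # vision fix: lane detection ran
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
    return x

accels = [0.0] * 50
lanes = [None] * 49 + [0.3]                # one lane-offset fix at the end
print(fuse_lateral(0.0, accels, lanes))    # fix pulls the drifting estimate back
```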
Luo, Jiebo; Boutell, Matthew
2005-05-01
Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
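A toy version of confidence-based Bayesian cue integration over the four canonical orientations might look like the following; the confidence-tempering rule is a hypothetical stand-in for the paper's integration scheme.

```python
import numpy as np

ORIENTATIONS = [0, 90, 180, 270]   # candidate image rotations (degrees)

def fuse_orientation(low_level_probs, semantic_probs, semantic_conf):
    """Confidence-weighted Bayesian combination of a low-level cue (color/
    texture classifier) with a semantic cue (e.g., a face or sky detector)."""
    p_low = np.asarray(low_level_probs, dtype=float)
    p_sem = np.asarray(semantic_probs, dtype=float)
    # Temper the semantic likelihood by detector confidence in [0, 1]:
    # conf 0 makes it uninformative, conf 1 uses it at full strength.
    posterior = p_low * p_sem ** semantic_conf   # uniform prior over 4 classes
    posterior /= posterior.sum()
    return ORIENTATIONS[int(np.argmax(posterior))], posterior

# low-level cue mildly prefers 0 deg; a fairly confident sky detector agrees
print(fuse_orientation([0.4, 0.2, 0.2, 0.2], [0.7, 0.1, 0.1, 0.1], 0.8))
```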
ERIC Educational Resources Information Center
Quartz, Karen Hunter; Kawasaki, Jarod; Sotelo, Daniel; Merino, Kimberly
2014-01-01
This paper reports the results of an 18-month integrated, problem-solving research study of one new school's efforts to create a K-12 system of student assessment data that reflects their innovative vision for personalized and student-centered instruction. Based on interview, observational, and documentary data, the authors report how…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
Visioning the Centre for Place and Sustainability Studies through an embodied aesthetic wholeness
NASA Astrophysics Data System (ADS)
Sameshima, Pauline; Greenwood, David A.
2015-03-01
In the context of research universities, what kind of places and spaces can we create for ourselves that foster a holistic vision of learning and community, a vision that is responsive to the shifting social and ecological landscapes of the Anthropocene? How can these spaces simultaneously address the need to nurture both personal and cultural change? How do the frames we create enhance or limit our place-making? This paper offers one situated response to such questions as it theorizes and describes the arts integrated emergence of the Centre for Place and Sustainability Studies at Lakehead University. Drawing from critical cultural and ecological studies, we problematize creating spaces in both centers and margins, and offer an arts integrated vision of a space for diverse and evolving approaches to sustainability work: a meeting ground characterized by a commitment to parallax and embodied aesthetic wholeness.
An Augmented-Reality Edge Enhancement Application for Google Glass
Hwang, Alex D.; Peli, Eli
2014-01-01
Purpose Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer’s real world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Methods Google Glass’s camera lens distortions were corrected by using an image warping. Since the camera and virtual display are horizontally separated by 16mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of 3D transformations to minimize parallax errors before the final projection to the Glass’ see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal vision subjects, with and without a diffuser film to simulate vision loss. Results For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera’s performance. The authors assume this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Conclusions Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, natural visual scanning process is possible, and suggests that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration. PMID:24978871
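The core enhancement step, overlaying strengthened edges on the live camera view, can be sketched with standard tools; the Canny thresholds and overlay color below are illustrative, and the Glass-specific distortion correction and parallax-minimizing 3D transforms described above are omitted.

```python
import cv2
import numpy as np

def enhance_edges(frame_bgr, low=50, high=150, color=(0, 255, 0)):
    """Overlay detected edges on the camera frame, approximating the kind of
    contrast enhancement described (parameter values are illustrative)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    # thicken edges slightly so they stay visible on a see-through display
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    out = frame_bgr.copy()
    out[edges > 0] = color
    return out

# synthetic frame: dark rectangle on a gray background
frame = np.full((240, 320, 3), 120, np.uint8)
cv2.rectangle(frame, (100, 80), (220, 160), (60, 60, 60), -1)
cv2.imwrite("enhanced.png", enhance_edges(frame))
```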
Enhanced modeling and simulation of EO/IR sensor systems
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; May, Christopher
2015-05-01
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end to end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed in NV-IPM, modeled in NV-IPM, and then seamlessly input into the wargames for operational analysis. After theoretical design, prototype sensors can be measured by using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. The measurement process to high fidelity modeling and simulation can then be repeated again and again throughout the entire life cycle of an EO/IR sensor as needed, to include LRIP, full rate production, and even after Depot Level Maintenance. This is a prototypical example of how an engineering level model and higher level simulations can share models to mutual benefit.
Buildings of the Future Scoping Study: A Framework for Vision Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Na; Goins, John D.
2015-02-01
The Buildings of the Future Scoping Study, funded by the U.S. Department of Energy (DOE) Building Technologies Office, seeks to develop a vision for what U.S. mainstream commercial and residential buildings could become in 100 years. This effort is not intended to predict the future or develop a specific building design solution. Rather, it will explore future building attributes and offer possible pathways of future development. Whether we achieve a more sustainable built environment depends not just on technologies themselves, but on how effectively we envision the future and integrate these technologies in a balanced way that generates economic, social, and environmental value. A clear, compelling vision of future buildings will attract the right strategies, inspire innovation, and motivate action. This project will create a cross-disciplinary forum of thought leaders to share their views. The collective views will be integrated into a future building vision and published in September 2015. This report presents a research framework for the vision development effort based on a literature survey and gap analysis. This document has four objectives. First, it defines the project scope. Next, it identifies gaps in the existing visions and goals for buildings and discusses the possible reasons why some visions did not work out as hoped. Third, it proposes a framework to address those gaps in the vision development. Finally, it presents a plan for a series of panel discussions and interviews to explore a vision that mitigates problems with past building paradigms while addressing key areas that will affect buildings going forward.
NASA Astrophysics Data System (ADS)
San Gil, Inigo; White, Marshall; Melendez, Eda; Vanderbilt, Kristin
The thirty-year-old United States Long Term Ecological Research Network has developed extensive metadata to document their scientific data. Standard and interoperable metadata is a core component of the data-driven analytical solutions developed by this research network. Content management systems offer an affordable solution for rapid deployment of metadata-centered information management systems. We developed a customized integrative metadata management system based on the Drupal content management system technology. Building on knowledge and experience with the Sevilleta and Luquillo Long Term Ecological Research sites, we successfully deployed the first two medium-scale customized prototypes. In this paper, we describe the vision behind our Drupal-based information management instances, and list the features offered through these Drupal-based systems. We also outline the plans to expand the information services offered through these metadata-centered management systems. We will conclude with the growing list of participants deploying similar instances.
Multifunctional millimeter-wave radar system for helicopter safety
NASA Astrophysics Data System (ADS)
Goshi, Darren S.; Case, Timothy J.; McKitterick, John B.; Bui, Long Q.
2012-06-01
A multi-featured sensor solution has been developed that enhances the operational safety and functionality of small airborne platforms, representing an invaluable stride toward enabling higher-risk, tactical missions. This paper demonstrates results from a recently developed multi-functional sensor system that integrates a high performance millimeter-wave radar front end, an evidence grid-based integration processing scheme, and the incorporation into a 3D Synthetic Vision System (SVS) display. The front end architecture consists of a W-band real-beam scanning radar that generates a high resolution real-time radar map and operates with an adaptable antenna architecture currently configured with an interferometric capability for target height estimation. The raw sensor data is further processed within an evidence grid-based integration functionality that results in high-resolution maps in the region surrounding the platform. Lastly, the accumulated radar results are displayed in a fully rendered 3D SVS environment integrated with local database information to provide the best representation of the surrounding environment. The integrated system concept will be discussed and initial results from an experimental flight test of this developmental system will be presented. Specifically, the forward-looking operation of the system demonstrates the system's ability to produce high precision terrain mapping with obstacle detection and avoidance capability, showcasing the system's versatility in a true operational environment.
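An evidence grid accumulates per-cell support across radar sweeps; a minimal log-odds version is sketched below, with increment and clamp values chosen purely for illustration rather than taken from the system.

```python
import numpy as np

class EvidenceGrid:
    """Log-odds evidence grid: each radar sweep adds evidence for (hit) or
    against (miss) cell occupancy, so terrain/obstacle estimates sharpen
    as returns accumulate."""
    def __init__(self, shape, l_hit=0.85, l_miss=-0.4, clamp=6.0):
        self.logodds = np.zeros(shape)
        self.l_hit, self.l_miss, self.clamp = l_hit, l_miss, clamp

    def update(self, hit_cells, miss_cells):
        for r, c in hit_cells:
            self.logodds[r, c] += self.l_hit
        for r, c in miss_cells:
            self.logodds[r, c] += self.l_miss
        np.clip(self.logodds, -self.clamp, self.clamp, out=self.logodds)

    def occupancy(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))   # probability per cell

g = EvidenceGrid((100, 100))
for _ in range(5):   # same obstacle seen on five sweeps, free space in between
    g.update(hit_cells=[(50, 60)], miss_cells=[(50, c) for c in range(60)])
print(g.occupancy()[50, 60], g.occupancy()[50, 30])
```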
Towards a Decision Support System for Space Flight Operations
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Hogle, Charles; Ruszkowski, James
2013-01-01
The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. The discrepancy of the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or due to external factors such as changes in program requirements or conditions associated with other organizations that are outside of MOD. The paper provides a roadmap for the three increments of this vision. These increments include (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management and (3) re-planning and automated execution. Each of these increments provide value independently; but some may also enable building of a subsequent increment.
Prevalence of non-strabismic anomalies of binocular vision in Tamil Nadu: report 2 of BAND study.
Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; George, Ronnie; Swaminathan, Meenakshi; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar
2017-11-01
Population-based studies on the prevalence of non-strabismic anomalies of binocular vision in ethnic Indians are more than two decades old. Based on indigenous normative data, the BAND (Binocular Vision Anomalies and Normative Data) study aims to report the prevalence of non-strabismic anomalies of binocular vision among school children in rural and urban Tamil Nadu. This population-based, cross-sectional study was designed to estimate the prevalence of non-strabismic anomalies of binocular vision in the rural and urban population of Tamil Nadu. In four schools, two each in rural and urban arms, 920 children in the age range of seven to 17 years were included in the study. Comprehensive binocular vision assessment was done for all children including evaluation of vergence and accommodative systems. In the first phase of the study, normative data of parameters of binocular vision were assessed followed by prevalence estimates of non-strabismic anomalies of binocular vision. The mean and standard deviation of the age of the sample were 12.7 ± 2.7 years. The prevalence of non-strabismic anomalies of binocular vision in the urban and rural arms was found to be 31.5 and 29.6 per cent, respectively. Convergence insufficiency was the most prevalent (16.5 and 17.6 per cent in the urban and rural arms, respectively) among all the types of non-strabismic anomalies of binocular vision. There was no gender predilection and no statistically significant differences were observed between the rural and urban arms in the prevalence of non-strabismic anomalies of binocular vision (Z-test, p > 0.05). The prevalence of non-strabismic anomalies of binocular vision was found to be higher in the 13 to 17 years age group (36.2 per cent) compared to seven to 12 years (25.1 per cent) (Z-test, p < 0.05). Non-strabismic binocular vision anomalies are highly prevalent among school children and the prevalence increases with age. With increasing near visual demands in the higher grades, these anomalies could significantly impact the reading efficiency of children. Thus, it is recommended that screening for anomalies of binocular vision should be integrated into the conventional vision screening protocol. © 2016 Optometry Australia.
NASA Astrophysics Data System (ADS)
Demetriou, Demetris; Campagna, Michele; Racetin, Ivana; Konecny, Milan
2017-09-01
INSPIRE is the EU's authoritative Spatial Data Infrastructure (SDI) in which each Member State provides access to their spatial data across a wide spectrum of data themes to support policy making. In contrast, Volunteered Geographic Information (VGI) is one type of user-generated geographic information where volunteers use the web and mobile devices to create, assemble and disseminate spatial information. There are similarities and differences between SDIs and VGI initiatives, as well as advantages and disadvantages. Thus, the integration of these two data sources will enhance what is offered to end users to facilitate decision makers and the wider community regarding solving complex spatial problems, managing emergency situations and getting useful information for peoples' daily activities. Although some efforts towards this direction have arisen, several key issues need to be considered and resolved. Further to this integration, the vision is the development of a global integrated GIS platform, which extends the capabilities of a typical data-hub by embedding on-line spatial and non-spatial applications, to deliver both static and dynamic outputs to support planning and decision making. In this context, this paper discusses the challenges of integrating INSPIRE with VGI and outlines a generic framework towards creating a global integrated web-based GIS platform. The tremendous high speed evolution of the Web and Geospatial technologies suggests that this "super" global Geo-system is not far away.
Rees, Gwyneth; Holloway, Edith E; Craig, Graeme; Hepi, Niky; Coad, Samantha; Keeffe, Jill E; Lamoureux, Ecosse L
2012-12-01
To describe the integration of depression screening training into the professional development programme for low vision rehabilitation staff and report on staff evaluation of this training. Pre-post intervention study, in a single population of low vision rehabilitation staff. Three hundred and thirty-six staff from Australia's largest low vision rehabilitation organization, Vision Australia. Staff completed the depression screening and referral training as part of a wider professional development programme. A pre-post-training questionnaire was administered to all staff. Descriptive and non-parametric statistics were used to determine differences in self-reported knowledge, confidence, barriers to recognition and management of depression between baseline and post training. One hundred and seventy-two participants completed both questionnaires. Following training, participants reported an increased knowledge of depression, were more likely to respond to depression in their clients and reported to be more confident in managing depression (P < 0.05). A range of barriers were identified including issues related to the client (e.g. acceptance of referrals); practitioners (e.g. skill, role); availability and accessibility of psychological services; time and contact constraints; and environmental barriers (e.g. lack of privacy). Additional training incorporating more active and 'hands-on' sessions are likely to be required. This training is a promising first step in integrating a depression screening tool into low vision rehabilitation practice. Further work is needed to determine the barriers and facilitators to implementation in practice and to assess clients' acceptability and outcomes. © 2012 The Authors. Clinical and Experimental Ophthalmology © 2012 Royal Australian and New Zealand College of Ophthalmologists.
Integrated environmental modeling: a vision and roadmap for the future
Laniak, Gerard F.; Olchin, Gabriel; Goodall, Jonathan; Voinov, Alexey; Hill, Mary; Glynn, Pierre; Whelan, Gene; Geller, Gary; Quinn, Nigel; Blind, Michiel; Peckham, Scott; Reaney, Sim; Gaber, Noha; Kennedy, Philip R.; Hughes, Andrew
2013-01-01
Integrated environmental modeling (IEM) is inspired by modern environmental problems, decisions, and policies and enabled by transdisciplinary science and computer capabilities that allow the environment to be considered in a holistic way. The problems are characterized by the extent of the environmental system involved, dynamic and interdependent nature of stressors and their impacts, diversity of stakeholders, and integration of social, economic, and environmental considerations. IEM provides a science-based structure to develop and organize relevant knowledge and information and apply it to explain, explore, and predict the behavior of environmental systems in response to human and natural sources of stress. During the past several years a number of workshops were held that brought IEM practitioners together to share experiences and discuss future needs and directions. In this paper we organize and present the results of these discussions. IEM is presented as a landscape containing four interdependent elements: applications, science, technology, and community. The elements are described from the perspective of their role in the landscape, current practices, and challenges that must be addressed. Workshop participants envision a global scale IEM community that leverages modern technologies to streamline the movement of science-based knowledge from its sources in research, through its organization into databases and models, to its integration and application for problem solving purposes. Achieving this vision will require that the global community of IEM stakeholders transcend social and organizational boundaries and pursue greater levels of collaboration. Among the highest priorities for community action are the development of standards for publishing IEM data and models in forms suitable for automated discovery, access, and integration; education of the next generation of environmental stakeholders, with a focus on transdisciplinary research, development, and decision making; and providing a web-based platform for community interactions (e.g., continuous virtual workshops).
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time machine analog vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
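The hyperacuity claim rests on reading sub-sensor-spacing position from the graded responses of overlapping receptors. The sketch below is a deliberately simplified two-receptor Gaussian model, our own construction rather than the Wyoming sensor design, showing how a position much finer than the sensor spacing can be recovered from a response ratio.

```python
import numpy as np

def sensor_pair_response(x, centers=(-0.5, 0.5), sigma=0.8):
    """Two overlapping Gaussian acceptance profiles, loosely modeling the
    graded, overlapping fields of neighboring fly photoreceptors."""
    return np.exp(-(x - np.asarray(centers)) ** 2 / (2 * sigma ** 2))

def estimate_position(responses, centers=(-0.5, 0.5), sigma=0.8):
    """Recover target position from the response ratio analytically: the log
    ratio of two Gaussians is linear in x, so precision is set by noise
    rather than sensor spacing -- the hyperacuity effect."""
    r1, r2 = responses
    c1, c2 = centers
    return sigma ** 2 * np.log(r2 / r1) / (c2 - c1) + (c1 + c2) / 2.0

x_true = 0.123                 # far finer than the 1.0 unit sensor spacing
r = sensor_pair_response(x_true)
print(estimate_position(r))    # recovers ~0.123
```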
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance. Most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color temperature lighting for reliable image analysis.
NASA Astrophysics Data System (ADS)
Cao, Zhengcai; Yin, Longjie; Fu, Yili
2013-01-01
Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most of the solutions of the problem do not take the robot dynamics into account in the controller design, so that these controllers are difficult to realize satisfactory control in practical application. Besides, many of the approaches suffer from the initial speed and torque jump which are not practical in the real world. Considering the kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, applying the integration of adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller utilized to generate the command of velocity is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function used to reduce the chattering is designed, which is utilized to generate the command of torque to make the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, the neural dynamics model is integrated into the above mentioned controllers. The stability of the proposed control system is analyzed by using Lyapunov theory. Finally, the simulation of the control law is implemented in perturbed case, and the results show that the control scheme can solve the stabilization problem effectively. The proposed control law can solve the speed and torque jump problems, overcome external disturbances, and provide a new solution for the vision-based stabilization of the mobile robot.
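A common choice for the neural dynamics model used to eliminate speed and torque jumps is a shunting (Grossberg-type) equation; the form below is that standard model with our own symbols, offered as a sketch of the mechanism rather than the authors' exact formulation:

\[
\frac{dv_c}{dt} = -A\,v_c + (B - v_c)\,f^{+}(e) - (D + v_c)\,f^{-}(e), \qquad f^{+}(e) = \max(e,0), \quad f^{-}(e) = \max(-e,0)
\]

Here \(v_c\) is the smoothed command, \(e\) is the raw command from the kinematic or sliding-mode stage, and \(A\), \(B\), \(D\) are positive constants that bound \(v_c\) to \((-D, B)\); because \(v_c\) evolves continuously from its current value, the commanded velocity and torque ramp up smoothly instead of jumping at start-up or after a disturbance.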
Bringing UAVs to the fight: recent army autonomy research and a vision for the future
NASA Astrophysics Data System (ADS)
Moorthy, Jay; Higgins, Raymond; Arthur, Keith
2008-04-01
The Unmanned Autonomous Collaborative Operations (UACO) program was initiated in recognition of the high operational burden associated with utilizing unmanned systems by both mounted and dismounted, ground and airborne warfighters. The program was previously introduced at the 62nd Annual Forum of the American Helicopter Society in May of 2006. This paper presents the three technical approaches taken and results obtained in UACO. All three approaches were validated extensively in contractor simulations, two were validated in government simulation, one was flight tested outside the UACO program, and one was flight tested in Part 2 of UACO. Results and recommendations are discussed regarding diverse areas such as user training and human-machine interface, workload distribution, UAV flight safety, data link bandwidth, user interface constructs, adaptive algorithms, air vehicle system integration, and target recognition. Finally, a vision for UAV As A Wingman is presented.
Development of Moire machine vision
NASA Technical Reports Server (NTRS)
Harding, Kevin G.
1987-01-01
Three-dimensional perception is essential both to the development of versatile robotic systems that can handle complex manufacturing tasks in future factories and to the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described that will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques that take advantage of the strengths of Moire sensing. Moire techniques optically manipulate the complex visual data in a three-dimensional scene into a form that can be analyzed easily and quickly by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high-quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation capability is being developed to perform full-field range measurement and three-dimensional scene analysis.
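As a concrete example of how Moire data reaches a computer-friendly form, the sketch below recovers a height map from four phase-shifted fringe images, a common way to analyze Moire or projected-fringe patterns. The four-step phase-shifting formula is standard; the calibration constant and the simple row/column unwrapping are illustrative assumptions, not details taken from this program.

    import numpy as np

    def moire_height_map(i0, i1, i2, i3, height_per_fringe):
        # Four fringe images, each phase-shifted by 90 degrees:
        # i_k = A + B*cos(phi + k*pi/2), so the wrapped phase follows directly.
        phase = np.arctan2(i3 - i1, i0 - i2)              # wrapped to [-pi, pi]
        # Naive unwrap along rows then columns; real scenes need a robust unwrapper.
        unwrapped = np.unwrap(np.unwrap(phase, axis=0), axis=1)
        # One full fringe of phase corresponds to a known height increment.
        return unwrapped / (2 * np.pi) * height_per_fringe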
Adaptive design lessons from professional architects
NASA Astrophysics Data System (ADS)
Geiger, Ray W.; Snell, J. T.
1993-09-01
Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. In pursuing better ways to design cellular automata and qualification classifiers in a design process, we have found surprising success in exploring certain more esoteric approaches, e.g., the vision of interdisciplinary artistic discovery in and around creative problem solving. Our research into vision and hybrid sensory systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and future-oriented design practice. Discussion covers case-based techniques of cognitive ergonomics, natural modeling sources, and an open architectural process of means/goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process.
NASA Technical Reports Server (NTRS)
1994-01-01
NASA's approach to continual improvement (CI) is a systems-oriented, agency-wide approach that builds on the past accomplishments of NASA Headquarters and its field installations and helps achieve NASA's vision, mission, and values. The NASA of the future will fully use the principles of continual improvement in every aspect of its operations. This NASA CI plan defines a systematic approach and a model for continual improvement throughout NASA, stressing systems integration and optimization. It demonstrates NASA's constancy of purpose for improvement: a consistent vision of NASA as a worldwide leader in top-quality science, technology, and management practices. The CI plan provides the rationale, structures, methods, and steps, and it defines NASA's short-term (1-year) objectives for improvement. The CI plan presents the deployment strategies necessary for cascading the goals and objectives throughout the agency. It also provides guidance on implementing continual improvement with participation from top leadership and all levels of employees.
NASA Technical Reports Server (NTRS)
Schulte, Erin
2017-01-01
As augmented and virtual reality grow in popularity and more researchers focus on their development, other fields of technology have grown in the hope of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to build an intuitive, hands-free human-computer interaction (HCI) system using AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is used in devices such as the Microsoft Kinect, webcams, and similar hardware, has shown potential in assisting with the development of an HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in medicine, security, industrial development, and similar areas.
Development of Moire machine vision
NASA Astrophysics Data System (ADS)
Harding, Kevin G.
1987-10-01
Three-dimensional perception is essential both to the development of versatile robotic systems that can handle complex manufacturing tasks in future factories and to the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described that will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques that take advantage of the strengths of Moire sensing. Moire techniques optically manipulate the complex visual data in a three-dimensional scene into a form that can be analyzed easily and quickly by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high-quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation capability is being developed to perform full-field range measurement and three-dimensional scene analysis.
Low vision system for rapid near- and far-field magnification switching.
Ambrogi, Nicholas; Dias-Carlson, Rachel; Gantner, Karl; Gururaj, Anisha; Hanumara, Nevan; Narain, Jaya; Winter, Amos; Zielske, Iris; Satgunam, PremNandhini; Bagga, Deepak Kumar; Gothwal, Vijaya
2015-01-01
People suffering from low vision, a condition caused by a variety of eye-related diseases and/or disorders, find their ability to read greatly improved when text is magnified between 2 and 6 times. Assistive devices currently on the market are geared either towards reading text far away (~20 ft.) or very near (~2 ft.). This is a problem especially for students with low vision, as they struggle to flip their focus between the chalkboard (far field) and their notes (near field). A solution to this problem is of high interest to eye care facilities in the developing world: no devices currently exist that offer the aforementioned capabilities at an accessible price point. Through consultation with specialists at L.V. Prasad Eye Institute in India, the authors propose, design and demonstrate a device that fills this need, directed primarily at the Indian market. The device utilizes available hardware technologies to electronically capture video ahead of the user and zoom and display the image in real time on LCD screens mounted in front of the user's eyes. This design is integrated as a wearable system in a glasses form factor.
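The abstract does not disclose the device's software, but the core behavior, electronic capture plus switchable digital magnification, can be sketched with commodity tools. The following Python/OpenCV loop is a hypothetical illustration: the key bindings, zoom levels, and single-camera setup are assumptions, not the published design.

    import cv2

    NEAR_ZOOM, FAR_ZOOM = 2.0, 6.0   # magnifications within the 2-6x range cited

    def magnify(frame, zoom):
        # Digital zoom: crop the central 1/zoom region and rescale to full size.
        h, w = frame.shape[:2]
        ch, cw = int(h / zoom), int(w / zoom)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = frame[y0:y0 + ch, x0:x0 + cw]
        return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)

    cap = cv2.VideoCapture(0)
    zoom = FAR_ZOOM
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("low-vision aid", magnify(frame, zoom))
        key = cv2.waitKey(1) & 0xFF
        if key == ord('n'):      # near field (notes)
            zoom = NEAR_ZOOM
        elif key == ord('f'):    # far field (chalkboard)
            zoom = FAR_ZOOM
        elif key == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()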
Synthetic Vision Systems - Operational Considerations Simulation Experiment
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.
2007-01-01
Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.
Synthetic vision systems: operational considerations simulation experiment
NASA Astrophysics Data System (ADS)
Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.
2007-04-01
Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.
On-road vehicle detection: a review.
Sun, Zehang; Bebis, George; Miller, Ronald
2006-05-01
Developing on-board automotive driver assistance systems that aim to alert drivers to dangerous driving environments and possible collisions with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than fixed, as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent-vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods that quickly hypothesize the locations of vehicles in an image, and methods that verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, assess their potential for future deployment, and present directions for future research.
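The two-stage structure the review describes, cheap hypothesis generation (HG) followed by appearance-based hypothesis verification (HV), can be summarized in a short skeleton. The sketch below is only a structural illustration: the edge-density cue, window size, and thresholds are stand-ins for the many HG cues (shadow, symmetry, texture) and trained HV classifiers the paper surveys.

    import cv2

    def generate_hypotheses(gray, win=64, step=32, density=20):
        # HG: propose candidate windows wherever edge density is high,
        # a crude stand-in for shadow/symmetry/texture cues.
        edges = cv2.Canny(gray, 50, 150)
        boxes = []
        for y in range(0, gray.shape[0] - win, step):
            for x in range(0, gray.shape[1] - win, step):
                if edges[y:y + win, x:x + win].mean() > density:
                    boxes.append((x, y, win, win))
        return boxes

    def verify_hypotheses(gray, boxes, classifier):
        # HV: keep only the windows a trained classifier accepts as vehicles.
        return [(x, y, w, h) for (x, y, w, h) in boxes
                if classifier(gray[y:y + h, x:x + w])]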
EnVision+, a new dextran polymer-based signal enhancement technique for in situ hybridization (ISH).
Wiedorn, K H; Goldmann, T; Henne, C; Kühl, H; Vollmer, E
2001-09-01
Seventy paraffin-embedded cervical biopsy specimens and condylomata were tested for the presence of human papillomavirus (HPV) by conventional in situ hybridization (ISH) and by ISH with subsequent signal amplification. Signal amplification was performed either with a commercial biotinyl-tyramide-based detection system [GenPoint (GP)] or with the novel two-layer dextran polymer visualization system EnVision+ (EV), in which both EV-horseradish peroxidase (EV-HRP) and EV-alkaline phosphatase (EV-AP) were applied. We demonstrate for the first time that EV in combination with preceding ISH results in a considerable increase in signal intensity and sensitivity, without loss of specificity, compared to conventional ISH. Compared to GP, EV revealed a somewhat lower sensitivity, as measured by determination of the integrated optical density (IOD) of the positively stained cells. However, EV is easier to perform, requires a shorter assay time, and does not raise the background problems that may be encountered with biotinyl-tyramide-based amplification systems. (J Histochem Cytochem 49:1067-1071, 2001)
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can employ the precompiled soft-operators in a high-level processing chain and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
Augmented reality and haptic interfaces for robot-assisted surgery.
Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N
2012-03-01
Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.
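As an illustration of the forbidden-region virtual fixture idea mentioned above, the sketch below computes a spring-like repulsive force once the tool tip penetrates a spherical forbidden region. The spherical geometry and stiffness value are illustrative assumptions; in the paper, the region boundaries come from vision-based 3D surface reconstruction.

    import numpy as np

    def forbidden_region_force(tip, center, radius, stiffness=300.0):
        # No force while the tool tip stays outside the forbidden sphere.
        offset = tip - center
        dist = np.linalg.norm(offset)
        if dist >= radius:
            return np.zeros(3)
        # Inside: push the tip back out along the penetration direction.
        penetration = radius - dist
        return stiffness * penetration * offset / max(dist, 1e-9)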
USC orthogonal multiprocessor for image processing with neural networks
NASA Astrophysics Data System (ADS)
Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid
1990-07-01
This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, figures that have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will initially be applied to image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can run efficiently on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.
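The conflict-free orthogonal access that 2-D interleaved memory provides can be illustrated with the classic skewed-storage scheme, in which element (row, col) is assigned to module (row + col) mod N, so any whole row or whole column touches every module exactly once. The Python check below demonstrates that idea only; the OMP's actual custom spanning-bus design is more involved than this textbook scheme.

    N = 16  # processors, matching the 16-processor prototype

    def module_of(row, col):
        # Skewed storage: a full row or a full column maps onto
        # all N memory modules with no module hit twice.
        return (row + col) % N

    # Every row access and every column access is conflict-free.
    for r in range(N):
        assert sorted(module_of(r, c) for c in range(N)) == list(range(N))
    for c in range(N):
        assert sorted(module_of(r, c) for r in range(N)) == list(range(N))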
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Schwartz, Jeremy I.; Dunkle, Ashley; Akiteng, Ann R.; Birabwa-Male, Doreen; Kagimu, Richard; Mondo, Charles K.; Mutungi, Gerald; Rabin, Tracy L.; Skonieczny, Michael; Sykes, Jamila; Mayanja-Kizza, Harriet
2015-01-01
Background: The burden of non-communicable diseases (NCDs) in low- and middle-income countries (LMICs) is accelerating. Given that the capacity of health systems in LMICs is already strained by the weight of communicable diseases, these countries find themselves facing a double burden of disease. NCDs contribute significantly to morbidity and mortality, thereby playing a major role in the cycle of poverty and impeding development. Methods: Integrated approaches to health service delivery and healthcare worker (HCW) training will be necessary in order to successfully combat the great challenge posed by NCDs. Results: In 2013, we formed the Uganda Initiative for Integrated Management of NCDs (UINCD), a multidisciplinary research collaboration that aims to present a systems approach to integrated management of chronic disease prevention, care, and the training of HCWs. Discussion: Through broad-based stakeholder engagement, catalytic partnerships, and a collective vision, UINCD is working to reframe integrated health service delivery in Uganda. PMID:25563451
Wearable Improved Vision System for Color Vision Deficiency Correction
Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria
2017-01-01
Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal-vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827
NASA Astrophysics Data System (ADS)
Harrison, M.; Cocco, M.
2017-12-01
EPOS (European Plate Observing System) has been designed with the vision of creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the EPOS mission is to integrate the diverse and advanced European research infrastructures for solid Earth science, relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth system. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunamis, as well as the processes driving tectonics and Earth's surface dynamics. To accomplish its mission, EPOS is engaging different stakeholders to allow the Earth sciences to open new horizons in our understanding of the planet. EPOS also aims at contributing to prepare society for geo-hazards and to responsibly manage the exploitation of geo-resources. Through integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources, as well as Earth science applications to the environment and human welfare. The research infrastructures (RIs) that EPOS is coordinating include: i) distributed geophysical observing systems (seismological and geodetic networks); ii) local observatories (including geomagnetic, near-fault and volcano observatories); iii) analytical and experimental laboratories; iv) integrated satellite data and geological information services; v) new services for natural and anthropogenic hazards; and vi) access to geo-energy test beds. Here we present the activities planned for the implementation phase, focusing on the Thematic Core Services (TCS), the Integrated Core Services (ICS), and their interoperability. We will discuss the data, data products, software and services (DDSS) presently under implementation, which will be validated and tested during 2018. Particular attention in this talk will be given to connecting EPOS with similar global initiatives and identifying common best practices and approaches.
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.
1971-01-01
Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.
Initiative in Concurrent Engineering (DICE). Phase 1.
1990-02-09
and power of commercial and military electronics systems. The continual evolution of HDE technology offers far greater flexibility in circuit design... powerful magnetic field of the permanent magnets in the Sawyer motors. This makes it possible to have multiple robots in the workcell and to have them... Controller. The Adept IC was chosen because of its extensive processing power, integrated grayscale vision, standard 28 industrial I/O control
ERIC Educational Resources Information Center
Tacchi, Barbara M.
2013-01-01
Parent Liaisons can play an integral role in working to realize a vision for a strategic, comprehensive, and continuous system of family, school, and community partnerships that demonstrably contribute to children's development and school success. Parent involvement continues to receive an increasing amount of attention in federal and state…
Reduced vision selectively impairs spatial updating in fall-prone older adults.
Barrett, Maeve M; Doheny, Emer P; Setti, Annalisa; Maguinness, Corrina; Foran, Timothy G; Kenny, Rose Anne; Newell, Fiona N
2013-01-01
The current study examined the role of vision in spatial updating and its potential contribution to an increased risk of falls in older adults. Spatial updating was assessed using a path integration task in fall-prone and healthy older adults. Specifically, participants conducted a triangle completion task in which they were guided along two sides of a triangular route and were then required to return, unguided, to the starting point. During the task, participants could either clearly view their surroundings (full vision) or visuo-spatial information was reduced by means of translucent goggles (reduced vision). Path integration performance was measured by calculating the distance and angular deviation from the participant's return point relative to the starting point. Gait parameters for the unguided walk were also recorded. We found equivalent performance across groups on all measures in the full vision condition. In contrast, in the reduced vision condition, where participants had to rely on interoceptive cues to spatially update their position, fall-prone older adults made significantly larger distance errors relative to healthy older adults. However, there were no other performance differences between fall-prone and healthy older adults. These findings suggest that fall-prone older adults, compared to healthy older adults, have greater difficulty in reweighting other sensory cues for spatial updating when visual information is unreliable.
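The two performance measures used here, distance and angular deviation of the return point, are straightforward to compute from the triangle geometry. The sketch below shows one way to do so, assuming 2-D coordinates for the start point, the second turn point, and the participant's return point; the study's exact scoring conventions are not specified in the abstract.

    import numpy as np

    def completion_errors(start, turn, returned):
        # Distance error: how far the unguided walk ended from the start point.
        dist_err = np.linalg.norm(returned - start)
        # Angular error: deviation of the walked return heading from the ideal one.
        ideal = start - turn
        walked = returned - turn
        cosang = np.dot(ideal, walked) / (np.linalg.norm(ideal) * np.linalg.norm(walked))
        ang_err = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        return dist_err, ang_err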
Integrating Sensory/Actuation Systems in Agricultural Vehicles
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
In recent years, there have been major advances in the development of new and more powerful perception systems for agriculture, such as computer-vision and global positioning systems. Due to these advances, the automation of agricultural tasks has received an important stimulus, especially in the area of selective weed control where high precision is essential for the proper use of resources and the implementation of more efficient treatments. Such autonomous agricultural systems incorporate and integrate perception systems for acquiring information from the environment, decision-making systems for interpreting and analyzing such information, and actuation systems that are responsible for performing the agricultural operations. These systems consist of different sensors, actuators, and computers that work synchronously in a specific architecture for the intended purpose. The main contribution of this paper is the selection, arrangement, integration, and synchronization of these systems to form a whole autonomous vehicle for agricultural applications. This type of vehicle has attracted growing interest, not only for researchers but also for manufacturers and farmers. The experimental results demonstrate the success and performance of the integrated system in guidance and weed control tasks in a maize field, indicating its utility and efficiency. The whole system is sufficiently flexible for use in other agricultural tasks with little effort and is another important contribution in the field of autonomous agricultural vehicles. PMID:24577525
Integrating sensory/actuation systems in agricultural vehicles.
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-02-26
In recent years, there have been major advances in the development of new and more powerful perception systems for agriculture, such as computer-vision and global positioning systems. Due to these advances, the automation of agricultural tasks has received an important stimulus, especially in the area of selective weed control where high precision is essential for the proper use of resources and the implementation of more efficient treatments. Such autonomous agricultural systems incorporate and integrate perception systems for acquiring information from the environment, decision-making systems for interpreting and analyzing such information, and actuation systems that are responsible for performing the agricultural operations. These systems consist of different sensors, actuators, and computers that work synchronously in a specific architecture for the intended purpose. The main contribution of this paper is the selection, arrangement, integration, and synchronization of these systems to form a whole autonomous vehicle for agricultural applications. This type of vehicle has attracted growing interest, not only for researchers but also for manufacturers and farmers. The experimental results demonstrate the success and performance of the integrated system in guidance and weed control tasks in a maize field, indicating its utility and efficiency. The whole system is sufficiently flexible for use in other agricultural tasks with little effort and is another important contribution in the field of autonomous agricultural vehicles.
Ren, Yanna; Ren, Yanling; Yang, Weiping; Tang, Xiaoyu; Wu, Fengxia; Wu, Qiong; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong
2018-02-01
Recent research has shown that the magnitude of the response to multisensory information is highly dependent on the stimulus structure, and the temporal proximity of multiple signal inputs is a critical determinant of cross-modal integration. Here, we investigated the influence of temporal asynchrony on audiovisual integration in both younger and older adults using event-related potentials (ERPs). Our results showed that in the simultaneous audiovisual condition, early integration was similar for the younger and older groups, except that the earliest integration (80-110 ms), which occurred in the occipital region for older adults, was absent for younger adults. Additionally, late integration was delayed in older adults (280-300 ms) compared to younger adults (210-240 ms). In the audition-leading vision conditions, the earliest integration (80-110 ms) was absent in younger adults but did occur in older adults. Additionally, after increasing the temporal disparity from 50 ms to 100 ms, late integration was delayed in both younger (from 230-290 ms to 280-300 ms) and older (from 210-240 ms to 280-300 ms) adults. In the audition-lagging vision conditions, integration occurred only in the A100V condition for younger adults and in the A50V condition for older adults. The current results suggest that the audiovisual temporal integration pattern differs between the audition-leading and audition-lagging vision conditions, and they further reveal the varying effect of temporal asynchrony on audiovisual integration in younger and older adults. Copyright © 2017 Elsevier B.V. All rights reserved.
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
Agaba, Morris; Cavener, Douglas R.
2017-01-01
Background: The capacity of visually oriented species to perceive and respond to visual signals is integral to their evolutionary success. Giraffes are closely related to okapi, but the two species have a broad range of phenotypic differences, including their visual capacities. Vision studies rank the giraffe's visual acuity higher than that of all other artiodactyls, despite its sharing similar vision ecological determinants with many of them. The extent to which the giraffe's unique visual capacity, and its difference from okapi, is reflected by changes in their vision genes is not understood. Methods: The recent availability of giraffe and okapi genomes provided the opportunity to identify giraffe and okapi vision genes. Multiple strategies were employed to identify thirty-six candidate mammalian vision genes in the giraffe and okapi genomes. Quantification of selection pressure was performed by a combination of branch-site tests of positive selection and clade models of selection divergence, comparing giraffe and okapi vision genes and orthologous sequences from other mammals. Results: Signatures of selection were identified in key genes that could potentially underlie giraffe and okapi visual adaptations. Importantly, some genes that contribute to optical transparency of the eye and some that are critical in the light-signaling pathway were found to show signatures of adaptive evolution or selection divergence. Comparison between giraffe and other ruminants identifies significant selection divergence in CRYAA and OPN1LW. Significant selection divergence was identified in SAG, while positive selection was detected in LUM, when okapi is compared with ruminants and other mammals. Sequence analysis of OPN1LW showed that at least one of the sites known to affect spectral sensitivity of the red pigment is uniquely divergent between giraffe and other ruminants. Discussion: By taking a systemic approach to gene function in vision, the results provide the first molecular clues associated with giraffe and okapi vision adaptations. At least some of the genes that exhibit signatures of selection may reflect adaptive responses to differences in giraffe and okapi habitat. We hypothesize that the requirement for long-distance vision associated with predation and communication with conspecifics likely played an important role in the adaptive pressure on giraffe vision genes. PMID:28396824
Ishengoma, Edson; Agaba, Morris; Cavener, Douglas R
2017-01-01
The capacity of visually oriented species to perceive and respond to visual signals is integral to their evolutionary success. Giraffes are closely related to okapi, but the two species have a broad range of phenotypic differences, including their visual capacities. Vision studies rank the giraffe's visual acuity higher than that of all other artiodactyls, despite its sharing similar vision ecological determinants with many of them. The extent to which the giraffe's unique visual capacity, and its difference from okapi, is reflected by changes in their vision genes is not understood. The recent availability of giraffe and okapi genomes provided the opportunity to identify giraffe and okapi vision genes. Multiple strategies were employed to identify thirty-six candidate mammalian vision genes in the giraffe and okapi genomes. Quantification of selection pressure was performed by a combination of branch-site tests of positive selection and clade models of selection divergence, comparing giraffe and okapi vision genes and orthologous sequences from other mammals. Signatures of selection were identified in key genes that could potentially underlie giraffe and okapi visual adaptations. Importantly, some genes that contribute to optical transparency of the eye and some that are critical in the light-signaling pathway were found to show signatures of adaptive evolution or selection divergence. Comparison between giraffe and other ruminants identifies significant selection divergence in CRYAA and OPN1LW. Significant selection divergence was identified in SAG, while positive selection was detected in LUM, when okapi is compared with ruminants and other mammals. Sequence analysis of OPN1LW showed that at least one of the sites known to affect spectral sensitivity of the red pigment is uniquely divergent between giraffe and other ruminants. By taking a systemic approach to gene function in vision, the results provide the first molecular clues associated with giraffe and okapi vision adaptations. At least some of the genes that exhibit signatures of selection may reflect adaptive responses to differences in giraffe and okapi habitat. We hypothesize that the requirement for long-distance vision associated with predation and communication with conspecifics likely played an important role in the adaptive pressure on giraffe vision genes.
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
Geoinformatics 2007: data to knowledge
Brady, Shailaja R.; Sinha, A. Krishna; Gundersen, Linda C.
2007-01-01
Geoinformatics is the term used to describe a variety of efforts to promote collaboration between the computer sciences and the geosciences to solve complex scientific questions. It refers to the distributed, integrated digital information system and working environment that provides innovative means for the study of the Earth systems, as well as other planets, through use of advanced information technologies. Geoinformatics activities range from major research and development efforts creating new technologies to provide high-quality, sustained production-level services for data discovery, integration and analysis, to small, discipline-specific efforts that develop earth science data collections and data analysis tools serving the needs of individual communities. The ultimate vision of Geoinformatics is a highly interconnected data system populated with high quality, freely available data, as well as, a robust set of software for analysis, visualization, and modeling.
Ackland, Peter
2012-01-01
In the first 12 years of VISION 2020, sound programmatic approaches have been developed that are capable of delivering equitable eye health services to even the most remote and impoverished communities. A body of evidence around the economic arguments for investment in eye health has been developed that has fuelled successful advocacy work, resulting in supportive high-level policy statements. More than 100 national plans to achieve the elimination of avoidable blindness have been developed, and some notable contributions have been made by the corporate and government sectors to resource eye health programs. Good progress has been made to control infectious blinding diseases, and at the very least there is anecdotal evidence to suggest that the global increase in the prevalence of blindness and visual impairment has been reversed in recent years, despite the ever increasing and more elderly global population. However, if we are to achieve the goal of VISION 2020 we require a considerable scaling up of current efforts. This will depend on our future success in two key areas: i) successful advocacy and engagement at the individual country level to secure significantly enhanced national government commitment to financing their own VISION 2020 plans; and ii) a new approach to VISION 2020 thinking that integrates eye health into health system development and develops new partnerships with wider health development initiatives. PMID:22944746
Hybrid integration of VCSELs onto a silicon photonic platform for biosensing application
NASA Astrophysics Data System (ADS)
Lu, Huihui; Lee, Jun Su; Zhao, Yan; Cardile, Paolo; Daly, Aidan; Carroll, Lee; O'Brien, Peter
2017-02-01
This paper presents a technology for hybrid integration of vertical cavity surface emitting lasers (VCSELs) directly on a silicon photonics chip. By controlling the reflow of the solder balls used for electrical and mechanical bonding, the VCSELs were bonded at 10 degrees, achieving the optimum angle of incidence to the planar grating coupler through vision-based flip-chip techniques. The 1 dB discrepancy between the optical loss values of flip-chip passive assembly and active alignment confirmed that the general purpose of the flip-chip design concept was achieved. This hybrid approach of integrating a miniaturized light source on chip opens the possibility of highly compact sensor systems, enabling future portable and wearable diagnostic devices.
NASA Astrophysics Data System (ADS)
Fang, Yi-Chin; Wu, Bo-Wen; Lin, Wei-Tang; Jon, Jen-Liung
2007-11-01
Resolution and color are the two main criteria for assessing optical digital images, but integrally improving the image quality of an optical system is difficult because of limits such as the size, materials, and operating environment of the optical design. It is therefore important to apply artificial intelligence techniques such as genetic algorithms and neural networks to raise recognition capability for images blurred by aberrations and noise, or degraded by characteristics of human vision such as distant and small targets, while decreasing the chromatic aberration of the optical system and avoiding additional computational complexity in image processing. This study achieves the goal of integrally, economically, and effectively improving recognition and classification of low-quality images produced by the optical system and its environment.
Brief Daily Periods of Unrestricted Vision Can Prevent Form-Deprivation Amblyopia
Wensveen, Janice M.; Harwerth, Ronald S.; Hung, Li-Fang; Ramamirtham, Ramkumar; Kee, Chea-su; Smith, Earl L.
2006-01-01
PURPOSE To characterize how the mechanisms that produce unilateral form-deprivation amblyopia integrate the effects of normal and abnormal vision over time, the effects of brief daily periods of unrestricted vision on the spatial vision losses produced by monocular form deprivation were investigated in infant monkeys. METHODS Beginning at 3 weeks of age, unilateral form deprivation was initiated in 18 infant monkeys by securing a diffuser spectacle lens in front of one eye and a clear plano lens in front of the fellow eye. During the treatment period (18 weeks), three infants wore the diffusers continuously. For the other experimental infants, the diffusers were removed daily and replaced with clear, zero-powered lenses for 1 (n = 5), 2 (n = 6), or 4 (n = 4) hours. Four infants reared with binocular zero-powered lenses and four normally reared monkeys provided control data. RESULTS The degree of amblyopia varied significantly with the daily duration of unrestricted vision. Continuous form deprivation caused severe amblyopia. However, 1 hour of unrestricted vision reduced the degree of amblyopia by 65%, 2 hours reduced the deficits by 90%, and 4 hours preserved near-normal spatial contrast sensitivity. CONCLUSIONS The severely amblyogenic effects of form deprivation in infant primates are substantially reduced by relatively short daily periods of unrestricted vision. The manner in which the mechanisms responsible for amblyopia integrate the effects of normal and abnormal vision over time promotes normal visual development and has important implications for the management of human infants with conditions that potentially cause amblyopia. PMID:16723458
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels, and the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies rely heavily on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prostheses for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
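A standard example of the retinal "mathematical functions" referred to here is the center-surround receptive field, commonly modeled as a difference of Gaussians. The sketch below builds such a kernel; the sizes and scales are illustrative choices rather than parameters from the paper, and the fuzzy-neural layers it describes are not modeled.

    import numpy as np

    def dog_receptive_field(size=15, sigma_center=1.0, sigma_surround=3.0):
        # Difference-of-Gaussians: excitatory center minus inhibitory surround,
        # a classic model of retinal ganglion-cell receptive fields.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx ** 2 + yy ** 2
        center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
        surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
        return center - surround

    # Convolving an image with this kernel (e.g., scipy.signal.convolve2d)
    # yields an edge-enhancing, luminance-normalizing "retinal" response.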
NASA Astrophysics Data System (ADS)
Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank
2005-05-01
Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When SVS are used for aircraft precision approach guidance, accurate positioning relative to the runway, with a high level of integrity, is required. Precision approach guidance systems in use today require ground-based electronic navigation components, with at least one installation at each airport and, in many cases, multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway on the development of such a terrain-referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high-accuracy/resolution terrain database, this terrain-referenced navigation system can provide navigation and guidance information to the pilot on an SVS or on conventional instruments. The terrain-referenced navigation system under development at AEC operates on principles similar to other terrain navigation systems: a ground-sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; these data are then matched with an onboard terrain database to find the most likely position solution, which is used to update an inertial sensor-based navigator. AEC's design differs from today's common terrain navigators in its use of a high-resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner capable of providing tens of thousands of independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator, the high-resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on 1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size, and 2) the availability of high-accuracy/resolution databases. This paper presents results from flight tests in which the terrain-referenced navigator is used to provide guidance cues for a precision approach.
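The core database-matching step described above can be sketched compactly: slide the laser-derived terrain profile over candidate horizontal positions (bounded, as noted, by WAAS) and keep the position whose database elevations best fit the measurements. This Python sketch is a simplified illustration under stated assumptions (a 1 m post-spacing grid indexed in local east/north coordinates, candidates that keep indices in bounds); AEC's actual estimator also fuses the inertial navigator.

    import numpy as np

    def terrain_match(db, scan, candidates):
        # db: 2-D elevation grid, ~1 m post spacing, indexed [north, east].
        # scan: laser returns as rows of (east, north, elevation) in local coords.
        # candidates: horizontal offset hypotheses from the WAAS-bounded search space.
        best, best_cost = None, np.inf
        for (de, dn) in candidates:
            cols = np.round(scan[:, 0] + de).astype(int)
            rows = np.round(scan[:, 1] + dn).astype(int)
            resid = scan[:, 2] - db[rows, cols]
            # Removing the mean residual absorbs any common vertical bias.
            cost = np.mean((resid - resid.mean()) ** 2)
            if cost < best_cost:
                best, best_cost = (de, dn), cost
        return best, best_cost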
Pérez i de Lanuza, Guillem; Font, Enrique
2014-08-15
Ultraviolet (UV) vision and UV colour patches have been reported in a wide range of taxa and are increasingly appreciated as an integral part of vertebrate visual perception and communication systems. Previous studies with Lacertidae, a lizard family with diverse and complex coloration, have revealed the existence of UV-reflecting patches that may function as social signals. However, confirmation of the signalling role of UV coloration requires demonstrating that the lizards are capable of vision in the UV waveband. Here we use a multidisciplinary approach to characterize the visual sensitivity of a diverse sample of lacertid species. Spectral transmission measurements of the ocular media show that wavelengths down to 300 nm are transmitted in all the species sampled. Four retinal oil droplet types can be identified in the lacertid retina. Two types are pigmented and two are colourless. Fluorescence microscopy reveals that a type of colourless droplet is UV-transmitting and may thus be associated with UV-sensitive cones. DNA sequencing shows that lacertids have a functional SWS1 opsin, very similar at 13 critical sites to that in the presumed ancestral vertebrate (which was UV sensitive) and other UV-sensitive lizards. Finally, males of Podarcis muralis are capable of discriminating between two views of the same stimulus that differ only in the presence/absence of UV radiance. Taken together, these results provide convergent evidence of UV vision in lacertids, very likely by means of an independent photopigment. Moreover, the presence of four oil droplet types suggests that lacertids have a four-cone colour vision system. © 2014. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Issaei, Ali; Szczygiel, Lukasz; Hossein-Javaheri, Nima; Young, Mei; Molday, L. L.; Molday, R. S.; Sarunic, M. V.
2011-03-01
Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) are complementary retinal imaging modalities. Integration of SLO and OCT allows both fluorescent detection and depth-resolved structural imaging of the retinal cell layers to be performed in vivo. System customization is required to image the rodents used in medical research by vision scientists. We are investigating multimodal SLO/OCT imaging of a rodent model of Stargardt's Macular Dystrophy, which is characterized by retinal degeneration and accumulation of toxic autofluorescent lipofuscin deposits. Our new findings demonstrate the ability to track fundus autofluorescence and retinal degeneration concurrently.
NASA Technical Reports Server (NTRS)
Rhodes, Bradley; Meck, Janice
2005-01-01
NASA's National Vision for Space Exploration includes human travel beyond low earth orbit and the ultimate safe return of the crews. Crucial to fulfilling the vision is the successful and timely development of countermeasures for the adverse physiological effects on human systems caused by long term exposure to the microgravity environment. Limited access to in-flight resources for the foreseeable future increases NASA's reliance on ground-based analogs to simulate these effects of microgravity. The primary analog for human based research will be head-down bed rest. By this approach NASA will be able to evaluate countermeasures in large sample sizes, perform preliminary evaluations of proposed in-flight protocols and assess the utility of individual or combined strategies before flight resources are requested. In response to this critical need, NASA has created the Bed Rest Project at the Johnson Space Center. The Project establishes the infrastructure and processes to provide a long term capability for standardized domestic bed rest studies and countermeasure development. The Bed Rest Project design takes a comprehensive, interdisciplinary, integrated approach that reduces the resource overhead of one investigator for one campaign. In addition to integrating studies operationally relevant for exploration, the Project addresses other new Vision objectives, namely: 1) interagency cooperation with the NIH allows for Clinical Research Center (CRC) facility sharing to the benefit of both agencies, 2) collaboration with our International Partners expands countermeasure development opportunities for foreign and domestic investigators as well as promotes consistency in approach and results, 3) to the greatest degree possible, the Project also advances research by clinicians and academia alike to encourage return to earth benefits. This paper will describe the Project's top level goals, organization and relationship to other Exploration Vision Projects, implementation strategy, address Project deliverables, schedules and provide a status of bed rest campaigns presently underway.
Autonomous Energy Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey
With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, just to mention a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize themselves in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.
Vision Integrating Strategies in Ophthalmology and Neurochemistry (VISION)
2014-02-01
ganglion cells from pressure-induced damage in a rat model of glaucoma. Brn3b also induced optic nerve regeneration in this model (Stankowska et al. 2013... of glaucoma o Gene therapy with Neuritin1 structurally and functionally protected the retina in ONC model o CHOP knockout mice were structurally and... retinocollicular pathway of mice in a novel model of glaucoma. 2013 Annual Meeting of Association for Research in Vision and Ophthalmology, Abstract 421. Liu
Integrity Determination for Image Rendering Vision Navigation
2016-03-01
identifying an object within a scene, tracking a SIFT feature between frames, or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community... matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Dave; Stephan, Eric G.; Wang, Weimin
Through its Building Technologies Office (BTO), the United States Department of Energy’s Office of Energy Efficiency and Renewable Energy (DOE-EERE) is sponsoring an effort to advance interoperability for the integration of intelligent buildings equipment and automation systems, understanding the importance of integration frameworks and product ecosystems to this cause. This is important to BTO’s mission to enhance energy efficiency and save energy for economic and environmental purposes. For connected buildings ecosystems of products and services from various manufacturers to flourish, the ICT aspects of the equipment need to integrate and operate simply and reliably. Within the concepts of interoperability liemore » the specification, development, and certification of equipment with standards-based interfaces that connect and work. Beyond this, a healthy community of stakeholders that contribute to and use interoperability work products must be developed. On May 1, 2014, the DOE convened a technical meeting to take stock of the current state of interoperability of connected equipment and systems in buildings. Several insights from that meeting helped facilitate a draft description of the landscape of interoperability for connected buildings, which focuses mainly on small and medium commercial buildings. This document revises the February 2015 landscape document to address reviewer comments, incorporate important insights from the Buildings Interoperability Vision technical meeting, and capture thoughts from that meeting about the topics to be addressed in a buildings interoperability vision. In particular, greater attention is paid to the state of information modeling in buildings and the great potential for near-term benefits in this area from progress and community alignment.« less
NASA Astrophysics Data System (ADS)
Chonacky, Norman; Winch, David
2008-04-01
There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.
Honeine, Jean-Louis; Crisafulli, Oscar; Schieppati, Marco
2017-02-01
The aim of this study was to test the effects of a concurrent cognitive task on the promptness of the sensorimotor integration and reweighting processes following addition and withdrawal of vision. Fourteen subjects stood in tandem while vision was passively added and removed. Subjects performed a cognitive task, consisting of counting backward in steps of three, or were "mentally idle." We estimated the time intervals following addition and withdrawal of vision at which body sway began to change. We also estimated the time constant of the exponential change in body oscillation until the new level of sway was reached, consistent with the current visual state. Under the mentally idle condition, mean latency was 0.67 and 0.46 s and the mean time constant was 1.27 and 0.59 s for vision addition and withdrawal, respectively. Following addition of vision, counting backward delayed the latency by about 300 ms, without affecting the time constant. Following withdrawal, counting backward had no significant effect on either latency or time constant. The extension by counting backward of the time interval to stabilization onset on addition of vision suggests a competition for allocation of cortical resources. Conversely, the absence of cognitive task effect on the rapid onset of destabilization on vision withdrawal, and on the relevant reweighting time course, advocates the intervention of a subcortical process. Diverting attention from a challenging standing task discloses a cortical supervision on the process of sensorimotor integration of new balance-stabilizing information. A subcortical process would instead organize the response to removal of the stabilizing sensory input. NEW & NOTEWORTHY This study is the first to test the effect of an arithmetic task on the time course of balance readjustment following visual withdrawal or addition. Performing such a cognitive task increases the time delay following addition of vision but has no effect on withdrawal dynamics. This suggests that sensorimotor integration following addition of a stabilizing signal is performed at a cortical level, whereas the response to its withdrawal is "automatic" and accomplished at a subcortical level. Copyright © 2017 the American Physiological Society.
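The two quantities estimated above, latency and time constant, amount to fitting a delayed exponential to sway amplitude. A minimal formulation of such a model (our notation, an assumption rather than the authors' stated equation), for a visual transition at time t_0 with latency L and time constant τ:

\[
S(t) =
\begin{cases}
S_{\mathrm{old}}, & t < t_0 + L,\\[4pt]
S_{\mathrm{new}} + \left(S_{\mathrm{old}} - S_{\mathrm{new}}\right) e^{-(t - t_0 - L)/\tau}, & t \ge t_0 + L.
\end{cases}
\]

On this reading, the mentally idle addition-of-vision condition corresponds to L ≈ 0.67 s and τ ≈ 1.27 s, and counting backward lengthens L by about 0.3 s while leaving τ unchanged.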
Computational Unification: a Vision for Connecting Researchers
NASA Astrophysics Data System (ADS)
Troy, R. M.; Kingrey, O. J.
2002-12-01
Computational Unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter- and intra-discipline cooperation, and an end to 're-inventing the wheel'. As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions into a unified framework where research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing, and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through known lineage of all data products. An illustration of how scientific processes are encapsulated and executed illuminates how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics will illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed, including trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goal was to find an alternative to the Hughes EOS-DIS system; it is presently offered by the Science Tools Corporation, of which the author is a principal.
COBALT CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M. III; Restrepo, Carolina I.; Robertson, Edward A.; Seubert, Carl R.; Amzajerdian, Farzin
2016-01-01
COBALT is a terrestrial test platform for the development and maturation of GN&C (Guidance, Navigation and Control) technologies for PL&HA (Precision Landing and Hazard Avoidance). The project is developing a third-generation Langley Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the JPL Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. Together, these technologies provide navigation that enables controlled precision landing. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive Vertical Test Bed (VTB) developed by Masten Space Systems (MSS), and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).
Implementation process and challenges for the community-based integrated care system in Japan
Tsutsui, Takako
2014-01-01
Background: For the past ten years, Japan has been developing a long-term vision to address the peak in its older population that will be reached in 2025, when baby boomers turn 75 years of age. In 2003, the government set up a study group called “Caring for older people in 2015”, which led to a first reform of the Long-Term Care Insurance System in 2006. This study group was the first to suggest the creation of a community-based integrated care system. Reforms: Three measures were taken in 2006: ‘Building an active ageing society: implementation of preventive care services’, ‘Improve sustainability: revision of the remuneration of facilities providing care’ and ‘Integration: establishment of a new service system’. These reforms are at the core of the community-based integrated care system. Discussion: The socialization of long-term care that came along with the ageing of the population, and the second shift in Japan towards an increased reliance on the community, can provide useful information for other ageing societies. As a super-ageing society, Japan's attempts to develop a rather unique system based on the widely spread concept of integrated care should also become an increasing focus of attention. PMID:24478614
Operational Based Vision Assessment Cone Contrast Test: Description and Operation
2016-06-01
The Operational Based Vision Assessment (OBVA) cone contrast test (CCT) is designed to detect abnormalities and characterize the contrast sensitivity of the color mechanisms of the human visual system. When a criterion score is less than 1, the individual is determined to have an abnormal L-M mechanism; the L-M sensitivity of mildly abnormal individuals (anomalous trichromats) can also be characterized. Subjects respond using response pads, and this hardware is integrated with custom software that generates the stimuli, collects responses, and analyzes the results.
A 10,000-Pen Nanoplotter with Integrated Ink Delivery System
2007-03-03
2007-01-01
This report on manufacturing at the nanoscale poses questions such as whether nanoscale properties remain once the nanostructures are integrated up to the microscale, and how such properties are measured, alongside questions of positioning and assembling. Its stated vision is to employ the novel properties and processes associated with the nanoscale; theory, modeling, and simulation software are being developed to investigate nanoscale material properties and the synthesis of macromolecular systems.
2004-10-01
The authors thank the Information Processing Technology Office (IPTO) for its support of this work and Dr. John Salasin for his vision in conceiving it, and credit contributors to the ingredients of cognition identified in the INCOG framework presented therein, including Dr. John R. Anderson, Mr. Albert-Laszlo Barabasi, Dr. Goertzel, Professor Marvin Minsky, Dr. Robert Hecht-Nielsen, Dr. Marcus J. Huber, Dr. John Laird, Professor Pat Langley, and Dr. Christian Lebiere, among others.
Introduction: The SERENITY vision
NASA Astrophysics Data System (ADS)
Maña, Antonio; Spanoudakis, George; Kokolakis, Spyros
In this chapter we present an overview of the SERENITY approach. We describe the SERENITY model of secure and dependable applications and show how it addresses the challenge of developing, integrating and dynamically maintaining security and dependability mechanisms in open, dynamic, distributed and heterogeneous computing systems and in particular Ambient Intelligence scenarios. The chapter describes the basic concepts used in the approach and introduces the different processes supported by SERENITY, along with the tools provided.
MOBLAB: a mobile laboratory for testing real-time vision-based systems in path monitoring
NASA Astrophysics Data System (ADS)
Cumani, Aldo; Denasi, Sandra; Grattoni, Paolo; Guiducci, Antonio; Pettiti, Giuseppe; Quaglia, Giorgio
1995-01-01
In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing and testing real-time algorithms which monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suitable to map the position of the vehicle within the environment where it moves, to detect obstacles, to estimate motion, to plan the path and to warn the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory outline and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. 'Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Some preliminary results are presented showing the performances of the system.
Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin
2015-09-01
This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER-based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with a testing accuracy of 88.14%.
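The tempotron units referenced above are leaky integrate-and-fire neurons driven by address events. A minimal sketch of one such unit in Python, with exponential post-synaptic kernels and illustrative parameter values that are our assumptions, not the paper's:

import numpy as np

def lif_fires(spike_times, weights, t_max=0.5, dt=1e-3,
              tau_m=0.020, tau_s=0.005, v_thresh=1.0):
    """Tempotron-style leaky integrate-and-fire unit: sums weighted
    exponential post-synaptic kernels and reports a threshold crossing.
    spike_times: iterable of (time_in_seconds, afferent_index) address events."""
    t = np.arange(0.0, t_max, dt)
    v = np.zeros_like(t)
    for t_i, idx in spike_times:
        s = t - t_i
        psp = np.where(s > 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)
        v += weights[idx] * psp
    return bool(v.max() >= v_thresh)

# Toy usage: two address events arriving on two afferents
print(lif_fires([(0.10, 0), (0.12, 1)], np.array([2.0, 1.5])))

In a tempotron, the weights are learned so that the unit crosses threshold for one pattern class and stays below it for the others.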
Intelligent manipulation technique for multi-branch robotic systems
NASA Technical Reports Server (NTRS)
Chen, Alexander Y. K.; Chen, Eugene Y. S.
1990-01-01
New analytical developments in kinematics planning are reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. A novel framework for a robot learning mechanism is also introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) framework integrates fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for nominal robotics knowledge; and self-organized neural networks for the dynamic evolution of knowledge. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported. A decision was made to incorporate Local Area Network (LAN) technology in the overall communication system.
Forging a poison prevention and control system: report of an Institute of Medicine committee.
Guyer, Bernard; Mavor, Anne
2005-01-01
The Committee forged a vision for a national poison prevention and control system that broadly integrates the current network of poison control centers with state and local public health departments responsible for monitoring populations. Implementing the Committee's recommendations, however, will require leadership from the Congress and the federal agencies to whom the report is addressed: HRSA and CDC. The next steps include amendments to existing legislation to establish the national system and to secure federal funding to assure stability of the system and systematic oversight by the federal agencies to hold all parties accountable for the performance of the system.
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are attracting rapidly increasing interest because of their potential to increase safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
Gene Therapy for Color Blindness.
Hassall, Mark M; Barnard, Alun R; MacLaren, Robert E
2017-12-01
Achromatopsia is a rare congenital cause of vision loss due to isolated cone photoreceptor dysfunction. The most common underlying genetic mutations are autosomal recessive changes in CNGA3, CNGB3, GNAT2, PDE6H, PDE6C, or ATF6. Animal models of Cnga3, Cngb3, and Gnat2 have been rescued using AAV gene therapy, showing partial restoration of cone electrophysiology and integration of this new photopic vision in reflexive and behavioral visual tests. Three gene therapy phase I/II trials are currently being conducted in human patients in the USA, the UK, and Germany. This review details the AAV gene therapy treatments of achromatopsia to date. We also present novel data showing rescue of a Cnga3-/- mouse model using an rAAV.CBA.CNGA3 vector. We conclude by synthesizing the implications of this animal work for ongoing human trials, particularly the challenge of restoring integrated cone retinofugal pathways in an adult visual system. The evidence to date suggests that gene therapy for achromatopsia will need to be applied early in childhood to be effective.
NASA Astrophysics Data System (ADS)
Helbing, D.; Balietti, S.; Bishop, S.; Lukowicz, P.
2011-05-01
This contribution reflects on the comments of Peter Allen [1], Bikas K. Chakrabarti [2], Péter Érdi [3], Juval Portugali [4], Sorin Solomon [5], and Stefan Thurner [6] on three White Papers (WP) of the EU Support Action Visioneer (www.visioneer.ethz.ch). These White Papers are entitled "From Social Data Mining to Forecasting Socio-Economic Crises" (WP 1) [7], "From Social Simulation to Integrative System Design" (WP 2) [8], and "How to Create an Innovation Accelerator" (WP 3) [9]. In our reflections, the need and feasibility of a "Knowledge Accelerator" is further substantiated by fundamental considerations and recent events around the globe.

The Visioneer White Papers propose research to be carried out that will improve our understanding of complex techno-socio-economic systems and their interaction with the environment. Thereby, they aim to stimulate multi-disciplinary collaborations between ICT, the social sciences, and complexity science. Moreover, they suggest combining the potential of massive real-time data, theoretical models, large-scale computer simulations and participatory online platforms. By doing so, it would become possible to explore various futures and to expand the limits of human imagination when it comes to the assessment of the often counter-intuitive behavior of these complex techno-socio-economic-environmental systems. In this contribution, we also highlight the importance of a pluralistic modeling approach and, in particular, the need for a fruitful interaction between quantitative and qualitative research approaches.

In an appendix we briefly summarize the concept of the FuturICT flagship project, which will build on and go beyond the proposals made by the Visioneer White Papers. EU flagships are ambitious multi-disciplinary high-risk projects with a duration of at least 10 years and an envisaged overall budget of 1 billion EUR [10]. The goal of the FuturICT flagship initiative is to understand and manage complex, global, socially interactive systems, with a focus on sustainability and resilience.
Toward the development of portable miniature intelligent electronic color identification devices
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Livingston, Peter; Jahshan, David; Evans, Rob
2004-03-01
The identification and differentiation of colours is a relatively problematic task for colour-impaired and partially vision-impaired persons, and an impossible one for the completely blind. In various contexts, this leads to a loss of independence or an increased risk of harm. The identification of colour using optoelectronic devices, on the other hand, can be done precisely and inexpensively. Additionally, breakthroughs in miniaturising and integrating colour sensors into biological systems may lead to significant advances in electronic implants for alleviating blindness. Here we present a functional handheld device developed for the identification of colour, intended for use by the vision-impaired. We discuss the features and limitations of the device and describe in detail one target application - the identification of different banknote denominations by the blind.
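At its core, such a device reduces colour identification to nearest-neighbour matching of a sensor reading against stored reference colours. A minimal sketch in Python, with an illustrative palette rather than the device's actual calibration data:

import numpy as np

# Hypothetical reference palette (RGB); a real device would use calibrated values
PALETTE = {
    "red": (200, 30, 30),
    "green": (30, 160, 60),
    "blue": (30, 60, 200),
    "yellow": (220, 200, 40),
}

def identify_colour(rgb):
    """Return the name of the palette entry closest to the measured RGB triple."""
    sample = np.asarray(rgb, dtype=float)
    return min(PALETTE, key=lambda name: np.linalg.norm(sample - np.asarray(PALETTE[name], dtype=float)))

print(identify_colour((210, 190, 50)))  # -> "yellow"

A production implementation would likely convert readings to a perceptually uniform space such as CIELAB before measuring distances, so that "closest" better matches human colour judgments.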
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2017-05-01
The major goal of this paper is to investigate the Multi-CPU/FPGA SoC (System on Chip) design flow and to transfer know-how and skills for rapidly designing embedded real-time vision systems. Our aim is to show how the use of these devices can benefit system-level integration, since they make simultaneous hardware and software development possible. We take facial detection and pretreatments as a case study, since they have great potential to be used in several applications such as video surveillance, building access control, and criminal identification. The designed system uses the Xilinx Zedboard platform, which is the central element of the developed vision system. Video acquisition is performed using either a standard webcam connected to the Zedboard via the USB interface or several IP camera devices. Visualization of video content and intermediate results is possible with an HDMI interface connected to an HD display. The treatments embedded in the system are as follows: (i) pre-processing such as edge detection, implemented on the ARM core and in the reconfigurable logic, (ii) software implementation of motion detection and face detection using either Viola-Jones or LBP (Local Binary Pattern), and (iii) an application layer to select the processing application and to display results in a web page. One uniquely interesting feature of the proposed system is that two functions have been developed to transmit data from and to the VDMA port. With the proposed optimization, the hardware implementation of the Sobel filter takes 27 ms and 76 ms for 640x480 and 720p resolutions, respectively. Hence, with the FPGA implementation, an acceleration of 5 times is obtained, which allows the processing of 37 fps and 13 fps for 640x480 and 720p resolutions, respectively.
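For reference, the Sobel edge-detection pretreatment mentioned above is a pair of 3x3 convolutions followed by a gradient-magnitude computation. A minimal software sketch in Python (assuming NumPy and SciPy; the paper's implementation runs on the ARM core and FPGA fabric, not on this code):

import numpy as np
from scipy.ndimage import convolve

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
KY = KX.T                                                         # vertical gradient

def sobel_magnitude(gray):
    """Gradient-magnitude edge map of a 2-D grayscale frame."""
    gx = convolve(gray.astype(float), KX)
    gy = convolve(gray.astype(float), KY)
    return np.hypot(gx, gy)

# A 640x480 frame processed in 27 ms corresponds to about 1/0.027 ~ 37 fps,
# consistent with the throughput figures quoted above.
edges = sobel_magnitude(np.random.rand(480, 640))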
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-01-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
NASA Astrophysics Data System (ADS)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-11-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Initial SVS Integrated Technology Evaluation Flight Test Requirements and Hardware Architecture
NASA Technical Reports Server (NTRS)
Harrison, Stella V.; Kramer, Lynda J.; Bailey, Randall E.; Jones, Denise R.; Young, Steven D.; Harrah, Steven D.; Arthur, Jarvis J.; Parrish, Russell V.
2003-01-01
This document presents the flight test requirements for the Initial Synthetic Vision Systems Integrated Technology Evaluation Flight Test to be flown aboard NASA Langley's ARIES aircraft, and the final hardware architecture implemented to meet these requirements. Part I of this document contains the hardware, software, simulator, and flight operations requirements for this flight test as they were defined in August 2002. The contents of this section are the actual requirements document that was signed for this flight test. Part II of this document contains information pertaining to the hardware architecture that was realized to meet these requirements, as presented to and approved by a Critical Design Review Panel prior to installation on the B-757 Airborne Research Integrated Experiments Systems (ARIES) airplane. This information includes a description of the equipment, block diagrams of the architecture, layouts of the workstations, and pictures of the actual installations.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
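The dynamic programming step described above can be pictured as an alignment of one rectified scanline pair, where diagonal moves match pixels and off-diagonal moves pay an occlusion penalty. A generic sketch in Python (the authors' actual cost model fuses laser and intensity information, which this sketch omits):

import numpy as np

def scanline_alignment_cost(left_row, right_row, occ=10.0):
    """Dynamic-programming alignment of a rectified scanline pair.
    Diagonal moves match pixels (absolute intensity difference);
    horizontal/vertical moves pay an occlusion penalty occ. Backtracking
    through D would recover the pixel correspondences (disparities)."""
    n, m = len(left_row), len(right_row)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * occ
    D[0, :] = np.arange(m + 1) * occ
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1, j - 1] + abs(float(left_row[i - 1]) - float(right_row[j - 1]))
            D[i, j] = min(match, D[i - 1, j] + occ, D[i, j - 1] + occ)
    return D[n, m]

print(scanline_alignment_cost([10, 80, 80, 20], [10, 10, 80, 20]))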
Visual and haptic integration in the estimation of softness of deformable objects
Cellini, Cristiano; Kaim, Lukas; Drewing, Knut
2013-01-01
Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
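The optimal-integration benchmark referred to above is the standard maximum-likelihood cue-combination rule, under which each sense is weighted by its reliability (inverse variance). In the usual notation (ours, not the authors'):

\[
\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H,
\qquad
w_i = \frac{r_i}{r_V + r_H},
\qquad
r_i = \frac{1}{\sigma_i^2},
\qquad
r_{VH}^{\mathrm{opt}} = r_V + r_H .
\]

The finding that the visual weight (about 35%) exceeded this prediction, while the bisensory reliability fell short of r_V + r_H, is what points to a vision-biased, possibly fixed, weighting scheme.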
Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri
2015-01-01
Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5–6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. PMID:24698161
Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri
2015-01-01
Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5-6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
Knowledge integration in One Health policy formulation, implementation and evaluation
Esposito, Roberto; Canali, Massimo; Aragrande, Maurizio; Häsler, Barbara; Rüegg, Simon R
2018-01-01
The One Health concept covers the interrelationship between human, animal and environmental health and requires multistakeholder collaboration across many cultural, disciplinary, institutional and sectoral boundaries. Yet, the implementation of the One Health approach appears hampered by shortcomings in the global framework for health governance. Knowledge integration approaches, at all stages of policy development, could help to address these shortcomings. The identification of key objectives, the resolving of trade-offs and the creation of a common vision and a common direction can be supported by multicriteria analyses. Evidence-based decision-making and transformation of observations into narratives detailing how situations emerge and might unfold in the future can be achieved by systems thinking. Finally, transdisciplinary approaches can be used both to improve the effectiveness of existing systems and to develop novel networks for collective action. To strengthen One Health governance, we propose that knowledge integration becomes a key feature of all stages in the development of related policies. We suggest several ways in which such integration could be promoted. PMID:29531420
Knowledge integration in One Health policy formulation, implementation and evaluation.
Hitziger, Martin; Esposito, Roberto; Canali, Massimo; Aragrande, Maurizio; Häsler, Barbara; Rüegg, Simon R
2018-03-01
The One Health concept covers the interrelationship between human, animal and environmental health and requires multistakeholder collaboration across many cultural, disciplinary, institutional and sectoral boundaries. Yet, the implementation of the One Health approach appears hampered by shortcomings in the global framework for health governance. Knowledge integration approaches, at all stages of policy development, could help to address these shortcomings. The identification of key objectives, the resolving of trade-offs and the creation of a common vision and a common direction can be supported by multicriteria analyses. Evidence-based decision-making and transformation of observations into narratives detailing how situations emerge and might unfold in the future can be achieved by systems thinking. Finally, transdisciplinary approaches can be used both to improve the effectiveness of existing systems and to develop novel networks for collective action. To strengthen One Health governance, we propose that knowledge integration becomes a key feature of all stages in the development of related policies. We suggest several ways in which such integration could be promoted.
NASA Technical Reports Server (NTRS)
Bartolone, Anthony P.; Hughes, Monica F.; Wong, Douglas T.; Takallu, Mohammad A.
2004-01-01
Spatial disorientation induced by inadvertent flight into instrument meteorological conditions (IMC) continues to be a leading cause of fatal accidents in general aviation. The Synthetic Vision Systems General Aviation (SVS-GA) research element, an integral part of NASA's Aviation Safety and Security Program (AvSSP), is investigating a revolutionary display technology designed to mitigate low-visibility events such as controlled flight into terrain (CFIT) and low-visibility loss of control (LVLoC). The integrated SVS Primary Flight Display (SVS-PFD) utilizes computer-generated 3-dimensional imagery of the surrounding terrain augmented with flight path guidance symbology. This unique combination will provide GA pilots with an accurate representation of their environment and projection of their flight path, regardless of time of day or out-the-window (OTW) visibility. The initial Symbology Development for Head-Down Displays (SD-HDD) simulation experiment examined 16 display configurations on a centrally located high-resolution PFD installed in NASA's General Aviation Work Station (GAWS) flight simulator. The results of the experiment indicate that situation awareness (SA) can be enhanced without having a negative impact on flight technical error (FTE) by providing a general aviation pilot with an integrated SVS display to use when OTW visibility is obscured.
FleXConf: A Flexible Conference Assistant Using Context-Aware Notification Services
NASA Astrophysics Data System (ADS)
Armenatzoglou, Nikos; Marketakis, Yannis; Kriara, Lito; Apostolopoulos, Elias; Papavasiliou, Vicky; Kampas, Dimitris; Kapravelos, Alexandros; Kartsonakis, Eythimis; Linardakis, Giorgos; Nikitaki, Sofia; Bikakis, Antonis; Antoniou, Grigoris
Integrating context-aware notification services into ubiquitous computing systems aims at the provision of the right information to the right users, at the right time, in the right place, and on the right device, and constitutes a significant step towards the realization of the Ambient Intelligence vision. In this paper, we present FleXConf, a semantics-based system that supports location-based, personalized notification services for the assistance of conference attendees. Its special features include an ontology-based representation model, rule-based context-aware reasoning, and a novel positioning system for indoor environments.
A Graphical Operator Interface for a Telerobotic Inspection System
NASA Technical Reports Server (NTRS)
Kim, W. S.; Tso, K. S.; Hayati, S.
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable the development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
The deployment of information systems and information technology in field hospitals.
Crowe, Ian R J; Naguib, Raouf N G
2010-01-01
Information systems and related technologies continue to develop and have become an integral part of healthcare provision and hospital care in particular. Field hospitals typically operate in the most austere and difficult of conditions and have yet to fully exploit related technologies. This paper addresses those aspects of healthcare informatics, healthcare knowledge management and lean healthcare that can be applied to field hospitals, with a view to improving patient care. The aim is to provide a vision for the deployment of information systems and information technology in field hospitals, using the British Army's field hospital as a representative model.
Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun
2015-01-01
Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun
2015-01-08
Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
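The DTW recognizer named above aligns an observed gesture trajectory with stored templates and picks the closest one. A minimal sketch of the DTW distance in Python, with illustrative 1-D feature sequences rather than the authors' hand-tracking features:

import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two 1-D feature sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Classify an observed gesture by its nearest stored template
templates = {"swipe": [0, 1, 2, 3], "circle": [0, 2, 0, 2]}
observed = [0, 1, 1, 2, 3]
print(min(templates, key=lambda g: dtw_distance(observed, templates[g])))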
Taking Stock: Existing Resources for Assessing a New Vision of Science Learning
ERIC Educational Resources Information Center
Alonzo, Alicia C.; Ke, Li
2016-01-01
A new vision of science learning described in the "Next Generation Science Standards"--particularly the science and engineering practices and their integration with content--pose significant challenges for large-scale assessment. This article explores what might be learned from advances in large-scale science assessment and…
Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?
NASA Astrophysics Data System (ADS)
Schild, Jonas; Masuch, Maic
2012-03-01
This paper explores graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g., captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g., Nvidia 3D Vision) or that have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision with regard to their stereoscopic characteristics, how they draw attention, how we judge effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects that should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities, and challenges for integrating visual information elements into 3D-TV content. This work should help improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information in 3D footage.
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Interdisciplinary multisensory fusion: design lessons from professional architects
NASA Astrophysics Data System (ADS)
Geiger, Ray W.; Snell, J. T.
1992-11-01
Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. Human efficacy for innovation in architectural design has always reflected more the projected perceptual vision of the designer vis-à-vis the hierarchical spirit of the design process. In pursuing better ways to build and design things, we have found surprising success in exploring certain more esoteric applications. One of those applications is the vision of an artistic approach in and around creative problem solving. Our research into vision and visual systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and interdisciplinary design practice. Discussion will cover areas of cognitive ergonomics, natural modeling sources, and an open architectural process of means and goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process. One hypothesis is that the kinematic simulation of perceived connections between hard and soft sciences, centering on the life sciences and life in general, has become a very effective foundation for design theory and application.
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
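To make the contrast with flat architectures concrete, a deep hierarchy of roughly ten stages, each pooling over the previous one so that effective receptive fields grow with depth, can be sketched in a few lines of Python (purely illustrative PyTorch code, not a model proposed in the paper):

import torch.nn as nn

def deep_hierarchy(levels=10, width=32):
    """Illustrative stack of ~10 processing stages, echoing the depth
    attributed to the primate visual system."""
    layers = [nn.Conv2d(3, width, kernel_size=3, padding=1), nn.ReLU()]
    for _ in range(levels - 1):
        layers += [nn.Conv2d(width, width, kernel_size=3, padding=1),
                   nn.ReLU(),
                   nn.MaxPool2d(2, ceil_mode=True)]  # receptive fields grow with depth
    return nn.Sequential(*layers)

model = deep_hierarchy()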
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A binocular vision imaging system with a small field of view cannot reconstruct the 3-D shape of a dynamic object. We developed a linear-array CCD binocular stereo vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear-array CCD binocular vision imaging system, which has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of the linear-array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear-array cameras placed in a special arrangement and a horizontal moving platform that carries objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of moving objects, and the results are matched and 3-D reconstructed. The linear-array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects; this work is of great significance for measuring the 3-D morphology of moving objects.
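Once the rig is calibrated and correspondences are matched, reconstruction rests on triangulation. For a rectified pair with focal length f (in pixels), baseline B, and disparity d, the textbook relation is (a generic formula; the linear-array geometry of this system entails its own calibration model):

\[
Z = \frac{f\,B}{d},
\qquad
X = \frac{(u - c_x)\,Z}{f},
\qquad
Y = \frac{(v - c_y)\,Z}{f},
\]

where (u, v) is the pixel location and (c_x, c_y) the principal point.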
Exploration Medical Capability System Engineering Introduction and Vision
NASA Technical Reports Server (NTRS)
Mindock, J.; Reilly, J.
2017-01-01
Human exploration missions to beyond low Earth orbit destinations such as Mars will require more autonomous capability compared to current low Earth orbit operations. For the medical system, lack of consumable resupply, evacuation opportunities, and real-time ground support are key drivers toward greater autonomy. Recognition of the limited mission and vehicle resources available to carry out exploration missions motivates the Exploration Medical Capability (ExMC) Element's approach to enabling the necessary autonomy. The Element's work must integrate with the overall exploration mission and vehicle design efforts to successfully provide exploration medical capabilities. ExMC is applying systems engineering principles and practices to accomplish its integrative goals. This talk will briefly introduce the discipline of systems engineering and key points in its application to exploration medical capability development. It will elucidate technical medical system needs to be met by the systems engineering work, and the structured and integrative science and engineering approach to satisfying those needs, including the development of shared mental and qualitative models within and external to the human health and performance community. These efforts are underway to ensure relevancy to exploration system maturation and to establish medical system development that is collaborative with vehicle and mission design and engineering efforts.
NASA Technical Reports Server (NTRS)
Chavez, Carlos; Hammel, Bruce; Hammel, Allan; Moore, John R.
2014-01-01
Unmanned Aircraft Systems (UAS) represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the National Airspace System (NAS). To address this deficiency, NASA has established a project called UAS Integration in the NAS (UAS in the NAS), under the Integrated Systems Research Program (ISRP) of the Aeronautics Research Mission Directorate (ARMD). This project provides an opportunity to transition concepts, technology, algorithms, and knowledge to the Federal Aviation Administration (FAA) and other stakeholders to help them define the requirements, regulations, and issues for routine UAS access to the NAS. The safe, routine, and efficient integration of UAS into the NAS requires new radio frequency (RF) spectrum allocations and a new data communications system that is both secure and scalable with increasing UAS traffic without adversely impacting the Air Traffic Control (ATC) communication system. These data communications, referred to as Control and Non-Payload Communications (CNPC), exchange information between the unmanned aircraft and the ground control station to ensure safe, reliable, and effective unmanned aircraft flight operation. A Communications Subproject within the UAS in the NAS Project has been established to address issues related to CNPC development, certification and fielding. The focus of the Communications Subproject is on validating and allocating new RF spectrum and data link communications to enable civil UAS integration into the NAS. The goal is to validate secure, robust data links within the allocated frequency spectrum for UAS. A vision, architectural concepts, and seed requirements for the future commercial UAS CNPC system have been developed by RTCA Special Committee 203 (SC-203) in the process of determining formal recommendations to the FAA in its role provided for under the Federal Advisory Committee Act. NASA intends to conduct its research and development in keeping with this vision and associated architectural concepts. The prototype communication systems developed and tested by NASA will be used to validate and update the initial SC-203 requirements in order to provide a foundation for SC-203's Minimum Aviation System Performance Standards (MASPS).
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
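As a rough illustration of the color-histogram component of such a detector, here is a hedged sketch using OpenCV histogram back-projection; it stands in for LABRADOR's detector, whose exact implementation is not given in the abstract, and all bin counts and thresholds are assumptions.

```python
# Hedged color-histogram object detector sketch (not LABRADOR's actual code).
import cv2
import numpy as np

def learn_histogram(template_bgr):
    """Build a hue-saturation histogram from an image of the target object."""
    hsv = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def detect(frame_bgr, hist):
    """Back-project the learned histogram and return the best bounding box."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], scale=1)
    prob = cv2.GaussianBlur(prob, (9, 9), 0)
    _, mask = cv2.threshold(prob, 64, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```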
1991-12-01
integration. Three papers considered the ergonomics of helmet design and the snugness of fit to the head and the integration of new helmet mounted devices...with existing equipment. Two papers considered the effects of novel helmet designs on the pilot's ability to control head position and avoid fatigue. Two...the nature of information displayed, including data fused from multiple sources and design of abstract symbologies that present parameters of flight
Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.
Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro
2016-04-22
The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the RMS errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.
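A minimal sketch of the distance-keeping loop, assuming illustrative PID gains and the 0.3 m/s base speed mentioned above; the paper's actual gains and control structure may differ.

```python
# Hedged PID distance-keeping sketch for a leader-follower pair.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Keep an assumed 5 m gap at 10 Hz; positive output speeds the follower up.
pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.1)
measured_gap = 6.2                                    # from the marker-based vision system (m)
speed_command = 0.3 + pid.update(measured_gap - 5.0)  # base speed 0.3 m/s
```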
NASA Astrophysics Data System (ADS)
Cross, Jack; Schneider, John; Cariani, Pete
2013-05-01
Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision Systems (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.
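In its simplest form, the fusion step could resemble a weighted blend of co-registered images; the sketch below is a generic illustration under that assumption, not the proprietary HALS/REVS fusion algorithm.

```python
# Generic sensor/synthetic image fusion sketch (assumed alpha blending).
import numpy as np

def fuse(radar_img, synthetic_img, alpha=0.6):
    """Weighted blend of co-registered 8-bit grayscale images.
    alpha weights the live radar return over the synthetic terrain."""
    blended = alpha * radar_img.astype(np.float32) \
        + (1.0 - alpha) * synthetic_img.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```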
Evaluation of 5 different labeled polymer immunohistochemical detection systems.
Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A
2010-01-01
Immunohistochemical staining is important for diagnosis and therapeutic decision making but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA) were tested using 12 different, widely used mouse and rabbit primary antibodies, detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde fixed paraffin embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained false negatively. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time in the immunohistochemical detection systems currently in use.
New ultraportable display technology and applications
NASA Astrophysics Data System (ADS)
Alvelda, Phillip; Lewis, Nancy D.
1998-08-01
MicroDisplay devices are based on a combination of technologies rooted in the extreme integration capability of conventionally fabricated CMOS active-matrix liquid crystal display substrates. Customized diffraction grating and optical distortion correction technology for lens-system compensation allow the elimination of many lenses and systems-level components. The MicroDisplay Corporation's miniature integrated information display technology is rapidly leading to many new defense and commercial applications. There are no moving parts in MicroDisplay substrates, and the fabrication of the color generating gratings, already part of the CMOS circuit fabrication process, is effectively cost and manufacturing process-free. The entire suite of the MicroDisplay Corporation's technologies was devised to create a line of application-specific integrated circuit single-chip display systems with integrated computing, memory, and communication circuitry. Next-generation portable communication, computer, and consumer electronic devices such as truly portable monitor and TV projectors, eyeglass and head mounted displays, pagers and Personal Communication Services handsets, and wristwatch-mounted video phones are among the many target commercial markets for MicroDisplay technology. Defense applications range from Maintenance and Repair support, to night-vision systems, to portable projectors for mobile command and control centers.
Improvement attributes in healthcare: implications for integrated care.
Harnett, Patrick John
2018-04-16
Purpose Healthcare quality improvement is a key concern for policy makers, regulators, carers and service users. Despite a contemporary consensus among policy makers that integrated care represents a means to substantially improve service outcomes, progress has been slow. Difficulties achieving sustained improvement at scale imply that methods employed are not sufficient and that healthcare improvement attributes may be different when compared to prior reference domains. The purpose of this paper is to examine and synthesise key improvement attributes relevant to a complex healthcare change process, specifically integrated care. Design/methodology/approach This study is based on an integrative literature review on systemic improvement in healthcare. Findings A central theme emerging from the literature review indicates that implementing systemic change needs to address the relationship between vision, methods and participant social dynamics. Practical implications Accommodating personal and professional network dynamics is required for systemic improvement, especially among high autonomy individuals. This reinforces the need to recognise the change process as taking place in a complex adaptive system where personal/professional purpose/meaning is central to the process. Originality/value Shared personal/professional narratives are insufficiently recognised as a powerful change force, under-represented in linear and rational empirical improvement approaches.
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
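As a loose software analogue of the converging multilayered idea, here is a sketch in which each pyramid level halves resolution and applies a nonlinear stage; the specific nonlinearity is an assumption for illustration, not the authors' neuro-vision design.

```python
# Sketch of a converging pyramidal processing hierarchy (generic illustration).
import cv2
import numpy as np

def pyramid(image, levels=4):
    """Return a list of progressively coarser representations, each
    produced by blur/downsample followed by a nonlinear stage."""
    out = [image]
    for _ in range(levels - 1):
        image = cv2.pyrDown(image)            # blur + 2x downsample
        image = np.tanh(image / 128.0 - 1.0)  # assumed nonlinear processing stage
        image = ((image + 1.0) * 128.0).astype(np.uint8)
        out.append(image)
    return out
```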
Groundwater modeling in integrated water resources management--visions for 2020.
Refsgaard, Jens Christian; Højberg, Anker Lajer; Møller, Ingelise; Hansen, Martin; Søndergaard, Verner
2010-01-01
Groundwater modeling is undergoing a change from traditional stand-alone studies toward being an integrated part of holistic water resources management procedures. This is illustrated by the development in Denmark, where comprehensive national databases for geologic borehole data, groundwater-related geophysical data, geologic models, as well as a national groundwater-surface water model have been established and integrated to support water management. This has enhanced the benefits of using groundwater models. Based on insight gained from this Danish experience, a scientifically realistic scenario for the use of groundwater modeling in 2020 has been developed, in which groundwater models will be a part of sophisticated databases and modeling systems. The databases and numerical models will be seamlessly integrated, and the tasks of monitoring and modeling will be merged. Numerical models for atmospheric, surface water, and groundwater processes will be coupled in one integrated modeling system that can operate at a wide range of spatial scales. Furthermore, the management systems will be constructed with a focus on building credibility of model and data use among all stakeholders and on facilitating a learning process whereby data and models, as well as stakeholders' understanding of the system, are updated to currently available information. The key scientific challenges for achieving this are (1) developing new methodologies for integration of statistical and qualitative uncertainty; (2) mapping geological heterogeneity and developing scaling methodologies; (3) developing coupled model codes; and (4) developing integrated information systems, including quality assurance and uncertainty information that facilitate active stakeholder involvement and learning.
Design and control of an embedded vision guided robotic fish with multiple control surfaces.
Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei
2014-01-01
This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of the swimming robot propelled by a single control surface.
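A common way to realize a CPG is as coupled phase oscillators; the following hedged sketch illustrates that idea for three fin groups, with frequencies, couplings, and amplitudes chosen arbitrarily rather than taken from the robot.

```python
# Hedged CPG sketch: Kuramoto-style coupled oscillators driving fin angles.
import numpy as np

def cpg_step(phases, freqs, coupling, dt=0.01):
    """Advance coupled phase oscillators one time step."""
    n = len(phases)
    dphi = 2 * np.pi * freqs + np.array([
        sum(coupling[i][j] * np.sin(phases[j] - phases[i]) for j in range(n))
        for i in range(n)
    ])
    return phases + dphi * dt

phases = np.zeros(3)               # caudal, pectoral, pelvic oscillators
freqs = np.array([1.0, 1.0, 0.5])  # assumed frequencies (Hz)
coupling = np.ones((3, 3)) * 0.5   # assumed uniform coupling
for _ in range(1000):
    phases = cpg_step(phases, freqs, coupling)
angles = 20.0 * np.sin(phases)     # joint angles in degrees (assumed amplitude)
```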
Design and Control of an Embedded Vision Guided Robotic Fish with Multiple Control Surfaces
Wang, Kai; Tan, Min; Zhang, Jianwei
2014-01-01
This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of the swimming robot propelled by a single control surface. PMID:24688413
Teleretinal Imaging to Screen for Diabetic Retinopathy in the Veterans Health Administration
Cavallerano, Anthony A.; Conlin, Paul R.
2008-01-01
Diabetes is the leading cause of adult vision loss in the United States and other industrialized countries. While the goal of preserving vision in patients with diabetes appears to be attainable, the process of achieving this goal poses a formidable challenge to health care systems. The large increase in the prevalence of diabetes presents practical and logistical challenges to providing quality care to all patients with diabetes. Given this challenge, the Veterans Health Administration (VHA) is increasingly using information technology as a means of improving the efficiency of its clinicians. The VHA has taken advantage of a mature computerized patient medical record system by integrating a program of digital retinal imaging with remote image interpretation (teleretinal imaging) to assist in providing eye care to the nearly 20% of VHA patients with diabetes. We describe this clinical pathway for accessing patients with diabetes in ambulatory care settings, evaluating their retinas for level of diabetic retinopathy with a teleretinal imaging system, and prioritizing their access into an eye and health care program in a timely and appropriate manner. PMID:19885175
An Integrated Global Atmospheric Composition Observing System: Progress and Impediments
NASA Astrophysics Data System (ADS)
Keating, T. J.
2016-12-01
In 2003-2005, a vision of an integrated global observing system for atmospheric composition and air quality emerged through several international forums (IGACO, 2004; GEO, 2005). In the decade since, the potential benefits of such a system for improving our understanding and mitigation of health and climate impacts of air pollution have become clearer and the needs more urgent. Some progress has been made towards the goal: technology has developed, capabilities have been demonstrated, and lessons have been learned. In Europe, the Copernicus Atmospheric Monitoring Service has blazed a trail for other regions to follow. Powerful new components of the emerging global system (e.g. a constellation of geostationary instruments) are expected to come on-line in the near term. But there are important gaps in the emerging system that are likely to keep us from achieving for some time the full benefits that were envisioned more than a decade ago. This presentation will explore the components and benefits of an integrated global observing system for atmospheric composition and air quality, some of the gaps and obstacles that exist in our current capabilities and institutions, and efforts that may be needed to achieve the envisioned system.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong; Hsiung, Pao-Ann; Wan, Chieh-Hao; Koong, Chorng-Shiuh; Liu, Tang-Kun; Yang, Yuanfan; Lin, Chu-Hsing; Chu, William Cheng-Chung
2009-02-01
A billiard ball tracking system is designed to combine with a visual guide interface to instruct users for a reliable strike. The integrated system runs on a PC platform. The system makes use of a vision system for cue ball, object ball and cue stick tracking. A least-squares error calibration process correlates the real-world and the virtual-world pool ball coordinates for a precise guidance line calculation. Users are able to adjust the cue stick on the pool table according to a visual guidance line instruction displayed on a PC monitor. The ideal visual guidance line extended from the cue ball is calculated based on a collision motion analysis. In addition to calculating the ideal visual guide, the factors influencing selection of the best shot among different object balls and pockets are explored. It is found that a tolerance angle around the ideal line for the object ball to roll into a pocket determines the difficulty of a strike. This angle depends in turn on the distance from the pocket to the object, the distance from the object to the cue ball, and the angle between these two vectors. Simulation results for tolerance angles as a function of these quantities are given. A selected object ball was tested extensively with respect to various geometrical parameters with and without using our integrated system. Players with different proficiency levels were selected for the experiment. The results indicate that all players benefit from our proposed visual guidance system in enhancing their skills, while low-skill players show the maximum enhancement in skill with the help of our system. All exhibit enhanced maximum and average hit-in rates. Experimental results on hit-in rates have shown a pattern consistent with that of the analysis. The hit-in rate is thus tightly connected with the analyzed tolerance angles for sinking object balls into a target pocket. These results prove the efficiency of our system, and the analysis results can be used to attain an efficient game-playing strategy.
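To make the guidance geometry concrete, here is a hedged sketch of a ghost-ball aiming point and a tolerance-angle estimate that shrinks with object-to-pocket distance, consistent with the analysis above; the ball and pocket radii are assumed values, not those of the paper.

```python
# Hedged billiards guidance geometry sketch (assumed radii and simple model).
import math

BALL_R, POCKET_R = 2.86, 6.0  # cm, assumed

def ghost_ball(obj, pocket):
    """Cue-ball target point: one ball diameter behind the object ball
    along the object-to-pocket line (the ideal guidance line)."""
    dx, dy = pocket[0] - obj[0], pocket[1] - obj[1]
    d = math.hypot(dx, dy)
    return (obj[0] - 2 * BALL_R * dx / d, obj[1] - 2 * BALL_R * dy / d)

def tolerance_angle(obj, pocket):
    """Half-angle around the ideal line within which the object ball
    still drops; it grows as the object ball nears the pocket."""
    d = math.hypot(pocket[0] - obj[0], pocket[1] - obj[1])
    return math.atan2(POCKET_R - BALL_R, d)
```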
Effective Science Instruction: What Does Research Tell Us? Second Edition
ERIC Educational Resources Information Center
Banilower, Eric; Cohen, Kim; Pasley, Joan; Weiss, Iris
2010-01-01
This brief distills the research on science learning to inform a common vision of science instruction and to describe the extent to which K-12 science education currently reflects this vision. A final section on implications for policy makers and science education practitioners describes actions that could integrate the findings from research into…
Understanding the Graphical Challenges Faced by Vision-Impaired Students in Australian Universities
ERIC Educational Resources Information Center
Butler, Matthew; Holloway, Leona; Marriott, Kim; Goncu, Cagatay
2017-01-01
Information graphics such as plots, maps, plans, charts, tables and diagrams form an integral part of the student learning experience in many disciplines. However, for a vision-impaired student, accessing such graphical materials can be problematic. This research seeks to understand the current state of accessible graphics provision in Australian…
Visioning as an Integral Element to Understanding Indigenous Learners' Transition to University
ERIC Educational Resources Information Center
Parent, Amy
2017-01-01
This article focuses on high school to university transitions for Indigenous youth at universities in British Columbia, Canada. The study is premised on an Indigenous research design, which utilizes the concept of visioning and a storywork methodology (Archibald, 2008). The results challenge existing institutional and psychological approaches to…
ERIC Educational Resources Information Center
Higbee, Jeanne L., Ed.; Lundell, Dana B., Ed.; Arendale, David R., Ed.
2005-01-01
This book explores the vision and contributions of the former General College, a program that existed for 74 years at the University of Minnesota, highlighting its history, mission, programs, research, and student services. This includes an evolving and dynamic program for teaching, learning, and research for student success in higher education. Following…
Vision-based obstacle recognition system for automated lawn mower robot development
NASA Astrophysics Data System (ADS)
Mohd Zin, Zalhan; Ibrahim, Ratnawati
2011-06-01
Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
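A plausible minimal version of the described filter/segment/edge-detect pipeline, using standard OpenCV stages; the thresholds and area cutoff below are assumptions, not the paper's tuned values.

```python
# Hedged obstacle-detection pipeline sketch: filter, edge-detect, segment.
import cv2

def detect_obstacles(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)       # filtering
    edges = cv2.Canny(gray, 50, 150)               # edge detection
    edges = cv2.dilate(edges, None, iterations=2)  # close small gaps
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep regions big enough to be obstacles rather than grass texture.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > 500]
```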
McBride, Sebastian; Huelse, Martin; Lee, Mark
2013-01-01
Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
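Requirements 4 to 6 above can be sketched compactly: converge top-down relevance and bottom-up salience into a priority map, then trigger a saccade when the peak crosses a threshold. The following is a generic illustration, not the authors' robotic implementation.

```python
# Generic priority-map sketch for requirements 4-6 (assumed maps and threshold).
import numpy as np

def priority_map(bottom_up, excitation, inhibition):
    """Task relevance as a ratio of excitation to inhibition (req. 6),
    modulating bottom-up salience into a single map (req. 4)."""
    relevance = excitation / (inhibition + 1e-6)
    return bottom_up * relevance

def maybe_saccade(pmap, threshold=0.8):
    """Threshold function eliciting a saccade action (req. 5);
    returns the target location or None."""
    idx = np.unravel_index(np.argmax(pmap), pmap.shape)
    return idx if pmap[idx] > threshold else None
```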
Integration of autopatching with automated pipette and cell detection in vitro
Wu (吴秋雨), Qiuyu; Kolb, Ilya; Callahan, Brendan M.; Su, Zhaolun; Stoy, William; Kodandaramaiah, Suhasa B.; Neve, Rachael; Zeng, Hongkui; Boyden, Edward S.; Forest, Craig R.
2016-01-01
Patch clamp is the main technique for measuring electrical properties of individual cells. Since its discovery in 1976 by Neher and Sakmann, patch clamp has been instrumental in broadening our understanding of the fundamental properties of ion channels and synapses in neurons. The conventional patch-clamp method requires manual, precise positioning of a glass micropipette against the cell membrane of a visually identified target neuron. Subsequently, a tight “gigaseal” connection between the pipette and the cell membrane is established, and suction is applied to establish the whole cell patch configuration to perform electrophysiological recordings. This procedure is repeated manually for each individual cell, making it labor intensive and time consuming. In this article we describe the development of a new automatic patch-clamp system for brain slices, which integrates all steps of the patch-clamp process: image acquisition through a microscope, computer vision-based identification of a patch pipette and fluorescently labeled neurons, micromanipulator control, and automated patching. We validated our system in brain slices from wild-type and transgenic mice expressing channelrhodopsin 2 under the Thy1 promoter (line 18) or injected with a herpes simplex virus-expressing archaerhodopsin, ArchT. Our computer vision-based algorithm makes the fluorescent cell detection and targeting user independent. Compared with manual patching, our system is superior in both success rate and average trial duration. It provides more reliable trial-to-trial control of the patching process and improves reproducibility of experiments. PMID:27385800
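As an illustration of the fluorescent-cell detection step, here is a hedged sketch using Otsu thresholding and connected components; the authors' actual pipeline is more sophisticated, and the area bounds are assumptions.

```python
# Hedged fluorescent-cell candidate detection sketch (not the authors' pipeline).
import cv2

def find_cells(fluorescence_img, min_area=50, max_area=500):
    """Return centroids of blob-like bright regions (candidate somata).
    fluorescence_img: 8-bit single-channel fluorescence image."""
    blur = cv2.GaussianBlur(fluorescence_img, (7, 7), 0)
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; filter the rest by plausible soma size.
    return [tuple(centroids[i]) for i in range(1, n)
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
```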
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.
Monovision techniques for telerobots
NASA Technical Reports Server (NTRS)
Goode, P. W.; Carnils, K.
1987-01-01
The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.
Thinking Graphically: Connecting Vision and Cognition during Graph Comprehension
ERIC Educational Resources Information Center
Ratwani, Raj M.; Trafton, J. Gregory; Boehm-Davis, Deborah A.
2008-01-01
Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive…
Integration: The Key to Sustaining Kinesiology in Higher Education
ERIC Educational Resources Information Center
Gill, Diane L.
2007-01-01
Integration is the key to sustaining kinesiology as an academic and professional discipline in higher education. Following the vision of Amy Morris Homans, this paper focuses on integration in three ways. First, integration of our multidisciplinary scholarship, with a clear focus on physical activity, is essential to sustaining kinesiology as a…
Big Data and Nursing: Implications for the Future.
Topaz, Maxim; Pruinelli, Lisiane
2017-01-01
Big data is becoming increasingly more prevalent and it affects the way nurses learn, practice, conduct research and develop policy. The discipline of nursing needs to maximize the benefits of big data to advance the vision of promoting human health and wellbeing. However, current practicing nurses, educators and nurse scientists often lack the required skills and competencies necessary for meaningful use of big data. Some of the key skills for further development include the ability to mine narrative and structured data for new care or outcome patterns, effective data visualization techniques, and further integration of nursing sensitive data into artificial intelligence systems for better clinical decision support. We provide growth-path vision recommendations for big data competencies for practicing nurses, nurse educators, researchers, and policy makers to help prepare the next generation of nurses and improve patient outcomes through better-quality connected health.
Design of a dynamic test platform for autonomous robot vision systems
NASA Technical Reports Server (NTRS)
Rich, G. C.
1980-01-01
The concept and design of a dynamic test platform for development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform. It can then be subjected to a wide variety of simulated motions and can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as structure, driving linkages, and motors and transmissions.
Night vision imaging systems design, integration, and verification in military fighter aircraft
NASA Astrophysics Data System (ADS)
Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David
2012-04-01
This paper describes the developmental and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University, in order to confer the Night Vision Imaging Systems (NVIS) capability to the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggles (NVG) integration, cockpit instruments and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation activities of the internal and external lights. In particular, an iterative process was established, allowing rapid on-site correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the Test Crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications implemented. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks during NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications both in the front and rear cockpits at the various stages of the test campaign. This process allowed a considerable enhancement of the TORNADO NVIS configuration, giving a good medium-high level NVG operational capability to the aircraft. Further developments also include the design, integration and test of internal/external lighting for the Italian TORNADO "Mid Life Update" (MLU) and other programs, such as the AM-X aircraft internal/external lights modification/testing and the activities addressing low-altitude NVG operations with fast jets (e.g., TORNADO, AM-X, MB-339CD), a major issue being the safe ejection of aircrew with NVG and NVG modified helmets. Two options have been identified for solving this problem: namely the modification of the current Gentex HGU-55 helmets and the design of a new helmet incorporating a reliable NVG connection/disconnection device (i.e., a mechanical system fully integrated in the helmet frame), with embedded automatic disconnection capability in case of ejection.
New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots
Gonzalez-de-Soto, Mariano; Pajares, Gonzalo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976
New trends in robotics for agriculture: integration and assessment of a real fleet of robots.
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.
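To make the centralized extreme of the architecture spectrum concrete, here is a minimal sketch in which one controller process tracks unit status and dispatches tasks; the names and message shapes are illustrative assumptions, not the RHEA design.

```python
# Minimal centralized fleet-control topology sketch (assumed design).
from dataclasses import dataclass

@dataclass
class UnitStatus:
    unit_id: str
    position: tuple  # (x, y) in field coordinates
    busy: bool

class CentralController:
    """Whole-integrated topology: a central computer runs all processes."""
    def __init__(self):
        self.units = {}

    def report(self, status: UnitStatus):
        """Units periodically report their state to the central node."""
        self.units[status.unit_id] = status

    def dispatch(self, task_position):
        """Assign the task to the nearest idle unit, or None if all busy."""
        idle = [u for u in self.units.values() if not u.busy]
        if not idle:
            return None
        nearest = min(idle, key=lambda u:
                      (u.position[0] - task_position[0]) ** 2 +
                      (u.position[1] - task_position[1]) ** 2)
        nearest.busy = True
        return nearest.unit_id
```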
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
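As a software model of the kind of low-level pipeline such an FPGA hosts, here is a Sobel edge-detection sketch; the hardware would realize the same arithmetic as a streaming pipeline rather than nested Python loops, and this is an illustration, not the camera's actual architecture.

```python
# Software model of an FPGA-style edge-detection stage (Sobel, assumed).
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
SOBEL_Y = SOBEL_X.T

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel convolution over an 8-bit image."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
            gx = int((win * SOBEL_X).sum())
            gy = int((win * SOBEL_Y).sum())
            out[y, x] = abs(gx) + abs(gy)  # cheap magnitude, as in hardware
    return np.clip(out, 0, 255).astype(np.uint8)
```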
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrows, Clayton P.; Katz, Jessica R.; Cochran, Jaquelin M.
The Republic of the Philippines is home to abundant solar, wind, and other renewable energy (RE) resources that contribute to the national government's vision to ensure sustainable, secure, sufficient, accessible, and affordable energy. Because solar and wind resources are variable and uncertain, significant generation from these resources necessitates an evolution in power system planning and operation. To support Philippine power sector planners in evaluating the impacts and opportunities associated with achieving high levels of variable RE penetration, the Department of Energy of the Philippines (DOE) and the United States Agency for International Development (USAID) have spearheaded this study along with a group of modeling representatives from across the Philippine electricity industry, which seeks to characterize the operational impacts of reaching high solar and wind targets in the Philippine power system, with a specific focus on the integrated Luzon-Visayas grids.
Kukona, Anuenue; Tabor, Whitney
2011-01-01
The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355
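A toy version of the attractor idea: input "impulses" nudge a state on an energy landscape, and the resulting behavior is whichever basin the state settles into. The two-well landscape below is a generic illustration, not the authors' model.

```python
# Generic attractor-dynamics sketch: impulses nudge a state between basins.
def gradient(x):
    """Two-well energy E(x) = (x^2 - 1)^2 with attractors at x = +/-1
    (e.g., two competing eye-movement behaviors)."""
    return 4 * x * (x ** 2 - 1)

x, dt = 0.0, 0.01
for t in range(2000):
    impulse = 0.3 if t < 500 else 0.0  # language/vision input nudges the state
    x += (-gradient(x) + impulse) * dt
print("settled behavior:", "right" if x > 0 else "left")
```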
2011-11-01
RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica...
NEIBank: Genomics and bioinformatics resources for vision research
Peterson, Katherine; Gao, James; Buchoff, Patee; Jaworski, Cynthia; Bowes-Rickman, Catherine; Ebright, Jessica N.; Hauser, Michael A.; Hoover, David
2008-01-01
NEIBank is an integrated resource for genomics and bioinformatics in vision research. It includes expressed sequence tag (EST) data and sequence-verified cDNA clones for multiple eye tissues of several species, web-based access to human eye-specific SAGE data through EyeSAGE, and comprehensive, annotated databases of known human eye disease genes and candidate disease gene loci. All expression- and disease-related data are integrated in EyeBrowse, an eye-centric genome browser. NEIBank provides a comprehensive overview of current knowledge of the transcriptional repertoires of eye tissues and their relation to pathology. PMID:18648525
Integrated Information Systems Across the Weather-Climate Continuum
NASA Astrophysics Data System (ADS)
Pulwarty, R. S.; Higgins, W.; Nierenberg, C.; Trtanj, J.
2015-12-01
The increasing demand for well-organized (integrated) end-to-end research-based information has been highlighted in several National Academy studies, in IPCC Reports (such as the SREX and Fifth Assessment) and by public and private constituents. Such information constitutes a significant component of the "environmental intelligence" needed to address myriad societal needs for early warning and resilience across the weather-climate continuum. The next generation of climate research in service to the nation requires an even more visible, authoritative and robust commitment to scientific integration in support of adaptive information systems that address emergent risks and inform longer-term resilience strategies. A proven mechanism for resourcing such requirements is to demonstrate vision, purpose, support, connection to constituencies, and prototypes of desired capabilities. In this presentation we will discuss efforts at NOAA, and elsewhere, that: (1) improve information on how changes in extremes in key phenomena such as drought, floods, and heat stress impact management decisions for resource planning and disaster risk reduction; and (2) develop regional integrated information systems to address these emergent challenges, integrating observations, monitoring and prediction, impacts assessments and scenarios, preparedness and adaptation, and coordination and capacity-building. Such systems, as illustrated through efforts such as NIDIS, have strengthened integration across the foundational research enterprise (through, for instance, RISAs and Modeling Analysis Predictions and Projections) by increasing agility for responding to emergent risks. The recently-initiated Climate Services Information System, in support of the WMO Global Framework for Climate Services, draws on the above models and will be introduced during the presentation.
Compact, self-contained enhanced-vision system (EVS) sensor simulator
NASA Astrophysics Data System (ADS)
Tiana, Carlo
2007-04-01
We describe the model SIM-100 PC-based simulator for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
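The sensor-degradation stages mentioned above (random noise, fixed-pattern noise, dead pixels) can be sketched as post-processing on a clean rendered frame; the magnitudes below are illustrative assumptions, not SIM-100 parameters.

```python
# Hedged sensor-degradation sketch: noise, fixed pattern, and dead pixels.
import numpy as np

rng = np.random.default_rng(0)

def degrade(clean, fixed_pattern, dead_mask, noise_sigma=4.0):
    """clean: 8-bit rendered image; fixed_pattern: per-pixel offsets
    (constant across frames); dead_mask: boolean map of dead pixels."""
    frame = clean.astype(np.float32)
    frame += rng.normal(0.0, noise_sigma, clean.shape)  # temporal random noise
    frame += fixed_pattern                              # fixed-pattern noise
    frame[dead_mask] = 0.0                              # dead pixels
    return np.clip(frame, 0, 255).astype(np.uint8)

h, w = 480, 640
fixed_pattern = rng.normal(0.0, 2.0, (h, w)).astype(np.float32)
dead_mask = rng.random((h, w)) < 1e-4
```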
Computer Vision Research and its Applications to Automated Cartography
1985-09-01
3-D Scene Geometry (Thomas M. Strat and Martin A. Fischler); Appendix D: A New Sense for Depth of Field (Alex P. Pentland)...A. Baseline Stereo System: As a framework for integration and evaluation of our research in modeling 3-D scene geometry, as well as a...B. New Methods for Stereo Compilation: As we previously indicated, the conventional approach to recovering scene geometry from a stereo pair of
Robust pedestrian detection and tracking from a moving vehicle
NASA Astrophysics Data System (ADS)
Tuong, Nguyen Xuan; Müller, Thomas; Knoll, Alois
2011-01-01
In this paper, we address the problem of multi-person detection, tracking and distance estimation in a complex scenario using multiple cameras. Specifically, we are interested in a vision system for supporting the driver in avoiding any unwanted collision with a pedestrian. We propose an approach using Histograms of Oriented Gradients (HOG) to detect pedestrians in static images and a particle filter as a robust tracking technique to follow targets from frame to frame. Because the depth map requires expensive computation, we extract depth information of targets using Direct Linear Transformation (DLT) to reconstruct 3D-coordinates of correspondent points found by running Speeded Up Robust Features (SURF) on two input images. Using the particle filter, the proposed tracker can efficiently handle target occlusions in a simple background environment. However, to achieve reliable performance in complex scenarios with frequent target occlusions and complex cluttered background, results from the detection module are integrated to create feedback and recover the tracker from tracking failures due to the complexity of the environment and target appearance model variability. The proposed approach is evaluated on different data sets, both in a simple background scenario and in a cluttered background environment. The result shows that, by integrating detector and tracker, a reliable and stable performance is possible even if occlusion occurs frequently in a highly complex environment. A vision-based collision avoidance system for an intelligent car, as a result, can be achieved.
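A hedged sketch of the HOG detection stage, using OpenCV's stock people detector in place of the authors' trained model; the particle-filter tracking and SURF/DLT depth stages are omitted, and the confidence cutoff is an assumption.

```python
# HOG pedestrian detection sketch using OpenCV's built-in people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame_bgr):
    """Return bounding boxes of detected pedestrians in one frame."""
    boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    # Keep only confident detections (assumed cutoff).
    return [box for box, w in zip(boxes, weights) if float(w) > 0.5]
```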
A Policy Guide on Integrated Care (PGIC): Lessons Learned from EU Project INTEGRATE and Beyond
Devroey, Dirk
2017-01-01
Efforts are underway in many European countries to channel efforts into creating improved integrated health and social care services. But most countries lack a strategic plan that is sustainable over time, and that reflects a comprehensive systems perspective. The Policy Guide on Integrated Care (PGIC) as presented in this paper resulted from experiences with the EU Project INTEGRATE and our own work with healthcare reform for patients with chronic conditions at the national and international level. This project is one of the largest EU funded projects on Integrated Care, conducted over a four-year period (2012–2016) and included partners from nine European countries. Project Integrate aimed to gain insights into the leadership, management and delivery of integrated care to support European care systems to respond to the challenges of ageing populations and the rise of people living with long-term conditions. The objective of this paper is to describe the PGIC as both a tool and a reasoning flow that aims at supporting policy makers at the national and international level with the development and implementation of integrated care. Any Policy Guide on Integrated Care should build upon three building blocks: a mission, a vision and a strategy, which together aim to capture the large number of factors that directly or indirectly influence the successful development of integrated care. PMID:29588631
A Policy Guide on Integrated Care (PGIC): Lessons Learned from EU Project INTEGRATE and Beyond.
Borgermans, Liesbeth; Devroey, Dirk
2017-09-25
Efforts are underway in many European countries to channel efforts into creating improved integrated health and social care services. But most countries lack a strategic plan that is sustainable over time, and that reflects a comprehensive systems perspective. The Policy Guide on Integrated Care (PGIC) as presented in this paper resulted from experiences with the EU Project INTEGRATE and our own work with healthcare reform for patients with chronic conditions at the national and international level. This project is one of the largest EU funded projects on Integrated Care, conducted over a four-year period (2012-2016) and included partners from nine European countries. Project Integrate aimed to gain insights into the leadership, management and delivery of integrated care to support European care systems to respond to the challenges of ageing populations and the rise of people living with long-term conditions. The objective of this paper is to describe the PGIC as both a tool and a reasoning flow that aims at supporting policy makers at the national and international level with the development and implementation of integrated care. Any Policy Guide on Integrated Care should build upon three building blocks: a mission, a vision and a strategy, which together aim to capture the large number of factors that directly or indirectly influence the successful development of integrated care.
2015 Enterprise Strategic Vision
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-08-01
This document aligns with the Department of Energy Strategic Plan for 2014-2018 and provides a framework for integrating our missions and direction for pursuing DOE’s strategic goals. The vision is a guide to advancing world-class science and engineering, supporting our people, modernizing our infrastructure, and developing a management culture that operates a safe and secure enterprise in an efficient manner.
ERIC Educational Resources Information Center
Gryskiewicz, Stanley S., Ed.
The conference proceedings contain the following papers: "Hard Organizational Development" (Anthony); "Positive Impact of Humor in the Workplace or TQM (Total Quality Mirth) in Organizations" (Collier); "Introducing the Integrated Programme for the Creative Training of Leaders" (Diaz-Carrera); "Vision of Quality versus the Quality Vision" (Green);…
Integrating "Vision and Change" into a Biology Curriculum at a Small Comprehensive College
ERIC Educational Resources Information Center
Raimondi, Stacey L.; Marsh, Tamara L.; Arriola, Paul E.
2014-01-01
"Vision and Change," a publication by the American Association for the Advancement of Science, has illustrated the need for curricular change within biology departments across the nation. Yet despite this apparent need for change, many institutions have been slow to move for a number of reasons, perhaps most significant among them is a…
Study of the Peculiarities of Color Vision in the Course of "Biophysics" in a Pedagogical University
ERIC Educational Resources Information Center
Petrova, Elena Borisovna; Sabirova, Fairuza Musovna
2016-01-01
The article substantiates the necessity of studying the peculiarities of human color vision in the course "Biophysics," which has been integrated into many types of higher education institutions. It describes the experience of teaching this discipline in a pedagogical higher education institution. The article presents a brief review of…
78 FR 1216 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... Group Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: February 7... Technology A Study Section. Date: February 7-8, 2013. Time: 8:00 a.m. to 5:00 p.m. Agenda: To review and... Integrated Review Group, Cellular Aspects of Diabetes and Obesity Study Section. Date: February 7, 2013. Time...
Selective attention in multi-chip address-event systems.
Bartolozzi, Chiara; Indiveri, Giacomo
2009-01-01
Selective attention is the strategy used by biological systems to cope with the inherent limits of their available computational resources and to process sensory information efficiently. The same strategy can be used in artificial systems that have to process vast amounts of sensory data with limited resources. In this paper we present a neuromorphic VLSI device, the "Selective Attention Chip" (SAC), which can be used to implement selective attention models in multi-chip address-event systems. We also describe a real-time sensory-motor system which integrates the SAC with a dynamic vision sensor and a robotic actuator. We present experimental results from each component in the system and demonstrate how the complete system implements a real-time stimulus-driven selective attention model.
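The abstract does not detail the SAC's internal competition mechanism, but a saliency-driven winner-take-all with inhibition of return is the standard formulation of such selective attention models. The following minimal Python sketch illustrates that idea in software; the event format, map size, and all parameter values are assumptions made for illustration, not the chip's actual behavior.

```python
import numpy as np

def select_attention(events, shape=(32, 32), tau=0.9,
                     ior_penalty=5.0, n_shifts=3):
    """Toy software analogue of saliency-based selective attention.

    `events` is a list of (x, y) pixel addresses, standing in for
    address-events from a dynamic vision sensor. Activity is leakily
    integrated into a saliency map; the most salient location wins,
    then an inhibition-of-return penalty forces the focus to shift.
    """
    saliency = np.zeros(shape)
    for x, y in events:
        saliency *= tau                  # leak: older events fade
        saliency[y, x] += 1.0            # integrate the new event
    foci = []
    for _ in range(n_shifts):
        winner = np.unravel_index(np.argmax(saliency), shape)
        foci.append(winner)
        saliency[winner] -= ior_penalty  # inhibition of return
    return foci

# Three clusters of events; the foci visit them from most to least active.
events = [(3, 4)] * 10 + [(20, 8)] * 6 + [(11, 30)] * 3
print(select_attention(events))          # three (row, col) attention foci
```

In the real multi-chip system this competition runs in analog VLSI and the winning address is communicated over the address-event bus, so the software loop above should be read only as a functional description.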
The Ontology of Vision. The Invisible, Consciousness of Living Matter
Fiorio, Giorgia
2016-01-01
If I close my eyes, the absence of light activates the peripheral cells devoted to the perception of darkness. The awareness of “seeing oneself seeing” is in its essence a thought, one that is internal to the vision and prior to any object of sight. To this amphibious faculty, the “diaphanous color of darkness,” Aristotle assigns the principle of knowledge. “Vision is a whole perceptual system, not a channel of sense.” Functions of vision are interwoven with the texture of human interaction within a terrestrial environment that is in turn contained within the cosmic order. A transitive host within the resonance of an inner-outer environment, the human being is the contact-term between two orders of scale, both bigger and smaller than the individual unity. In the perceptual integrative system of human vision, the convergence-divergence of the corporeal presence and the diffraction of its own appearance is the margin. The sensation of being no longer coincides with the breath of life; it does not seem “real” without the trace of some visible evidence and its simultaneous “sharing”. Without a shadow, without an imprint, the numeric copia of the physical presence inhabits the transient memory of our electronic prostheses. A rudimentary “visuality” replaces tangible experience, dissipating its meaning and the awareness of being alive. Transversal to the civilizations of the ancient world, through different orders of function and status, the anthropomorphic “figuration” of archaic sculpture addresses the margin between Being and Non-Being. Statuary human archetypes are not meant to be visible, but to exist as vehicles of transcendence, to outlive the definition of human space-time. The awareness of individual finiteness seals the compulsion to “give body” to an invisible apparition, shaping the figuration of an ontogenetic expression of human consciousness. Subject and object, the term “humanum” fathoms the relationship between matter and its living dimension, “this de facto vision and the ‘there is’ which it contains.” The project reconsiders the dialectic between the terms vision–presence in the contemporary perception of archaic human statuary according to the transcendent meaning of its immaterial legacy. PMID:27014106
Management Practices and Tools: 2000-2004
NASA Technical Reports Server (NTRS)
2004-01-01
This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic is divided into four parts and covers the adoption of proven personnel and management reforms to implement the national space exploration vision, including the use of a "system-of-systems" approach; policies of spiral, evolutionary development; reliance upon lead systems integrators; and independent technical and cost assessments. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.
Consensus report on the future of animal-free systemic toxicity testing.
Leist, Marcel; Hasiwa, Nina; Rovida, Costanza; Daneshian, Mardas; Basketter, David; Kimber, Ian; Clewell, Harvey; Gocht, Tilman; Goldberg, Alan; Busquet, Francois; Rossi, Anna-Maria; Schwarz, Michael; Stephens, Martin; Taalman, Rob; Knudsen, Thomas B; McKim, James; Harris, Georgina; Pamies, David; Hartung, Thomas
2014-01-01
Since March 2013, animal use for cosmetics testing for the European market has been banned. This requires a renewed view on risk assessment in this field. However, in other fields as well, traditional animal experimentation does not always satisfy requirements in safety testing, as the need for human-relevant information is ever increasing. A general strategy for animal-free test approaches was outlined by the US National Research Council's vision document for Toxicity Testing in the 21st Century in 2007. It is now possible to provide a more defined roadmap on how to implement this vision for the four principal areas of systemic toxicity evaluation: repeat dose organ toxicity, carcinogenicity, reproductive toxicity and allergy induction (skin sensitization), as well as for the evaluation of toxicant metabolism (toxicokinetics) (Fig. 1). CAAT-Europe assembled experts from Europe, America and Asia to design a scientific roadmap for future risk assessment approaches, and the outcome was then further discussed and refined in two consensus meetings with over 200 stakeholders. The key recommendations include: focusing on improving existing methods rather than favoring de novo design; combining hazard testing with toxicokinetics predictions; developing integrated test strategies; incorporating new high content endpoints into classical assays; evolving test validation procedures; promoting collaboration and data-sharing among different industrial sectors; integrating new disciplines, such as systems biology and high throughput screening; and involving regulators early on in the test development process. A focus on data quality, combined with increased attention to the scientific background of a test method, will be an important driver. Information from each test system should be mapped along adverse outcome pathways. Finally, quantitative information on all factors and key events will be fed into systems biology models that allow a probabilistic risk assessment with flexible adaptation to exposure scenarios and individual risk factors.
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance the estimation of sex from osteological material.
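The abstract describes the system's principle (stacking height profiles of the surface into a three-dimensional image) without implementation detail. A minimal sketch of that reconstruction step follows, using a synthetic dome-shaped surface as a stand-in for real scanner output; all names, shapes, and resolutions are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def surface_from_profiles(profiles):
    """Stack successive 1-D height profiles into a 2-D height map.

    `profiles` is an iterable of equal-length arrays, one per scan
    position along the object, as a laser line scanner would produce.
    """
    return np.vstack(list(profiles))

# Synthetic stand-in for scanner output: a gently domed surface.
xs = np.linspace(-1.0, 1.0, 200)
profiles = [np.sqrt(np.clip(1.0 - xs**2 - y**2, 0.0, None))
            for y in np.linspace(-1.0, 1.0, 150)]
height_map = surface_from_profiles(profiles)

fig, ax = plt.subplots()
ax.imshow(height_map, cmap="gray")       # shading encodes surface height
ax.set_title("Height map reconstructed from line profiles")
plt.show()
```

On real data, fine relief such as vascular imprints and sutures would appear as small local deviations in the height map, which is why profile resolution matters for the anatomical details the authors report.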
Machine vision systems using machine learning for industrial product inspection
NASA Astrophysics Data System (ADS)
Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony
2002-02-01
Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages: Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learned by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Boards (PCBs) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
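The LIF/OLI split is essentially a train-then-deploy pattern: learn a model of acceptable appearance offline, then compare incoming products against it online. Below is a minimal Python sketch of that pattern under simple assumptions (per-pixel statistics over aligned grayscale images); the real SMV systems learn richer features from CAD data and display patterns, so this is an illustration of the two-stage idea, not the authors' method.

```python
import numpy as np

class SmartInspector:
    """Toy two-stage inspector in the spirit of the LIF/OLI split."""

    def learn(self, good_images, k=3.0):
        # LIF stage: estimate per-pixel mean and tolerance from
        # known-good samples.
        stack = np.stack(good_images).astype(float)
        self.mean = stack.mean(axis=0)
        self.tol = k * stack.std(axis=0) + 1e-3   # avoid zero tolerance

    def inspect(self, image):
        # OLI stage: flag pixels deviating beyond the learned tolerance.
        return np.abs(image.astype(float) - self.mean) > self.tol

# Usage: learn from 20 clean boards, then catch an injected defect.
rng = np.random.default_rng(0)
good = [100.0 + rng.standard_normal((8, 8)) for _ in range(20)]
insp = SmartInspector()
insp.learn(good)
test = good[0].copy()
test[2, 5] += 50.0                     # simulated defect
print(insp.inspect(test)[2, 5])        # True: defect pixel is flagged
```

The interactive learning mentioned in the abstract would correspond to updating these learned statistics from operator-confirmed pass/fail decisions; that refinement is omitted here.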