NASA Technical Reports Server (NTRS)
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
Sarriot, Eric G; Kouletio, Michelle; Jahan, Dr Shamim; Rasul, Izaz; Musha, Akm
2014-08-26
Starting in 1999, Concern Worldwide Inc. (Concern) worked with two Bangladeshi municipal health departments to support delivery of maternal and child health preventive services. A mid-term evaluation identified sustainability challenges. Concern relied on systems thinking implicitly to re-prioritize sustainability, but stakeholders also required a method, an explicit set of processes, to guide their decisions and choices during and after the project. Concern chose the Sustainability Framework method to generate creative thinking from stakeholders, create a common vision, and monitor progress. The Framework is based on participatory and iterative steps: defining (mapping) the local system and articulating a long-term vision, describing scenarios for achieving the vision, defining the elements of the model and selecting corresponding indicators, setting and executing an assessment plan, and repeated stakeholder engagement in analysis and decisions. Formal assessments took place up to 5 years post-project (2009). Strategic choices for the project were guided by articulating a collective vision for sustainable health, mapping the system of actors required to effect and sustain change, and defining different components of analysis. Municipal authorities oriented health teams toward equity-oriented service delivery efforts, strengthening of the functionality of Ward Health Committees, resource leveraging between municipalities and the Ministry of Health, and mitigation of contextual risks. Regular reference to a vision and a set of metrics (population health, organizational and community capacity) mitigated political factors. Key structures and processes were maintained following elections and political changes. Post-project achievements included the maintenance or improvement, 5 years post-project (2009), of 9 of the 11 health indicator gains realized during the project (1999-2004).
Some elements of performance and capacity weakened, but reductions in the equity gap achieved during the project were largely maintained post-project. Sustainability is dynamic and results from local systems processes, which can be strengthened through both implicit and explicit systems thinking steps applied with constancy of purpose.
Rapid matching of stereo vision based on fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei
2016-09-01
Stereo matching is the core of stereo vision, and many problems in it remain unsolved. For smooth surfaces from which feature points are not easily extracted, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: because corresponding points extracted from the left and right camera images share the same phase, rapid stereo matching can be achieved. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also makes a commercialized measurement system feasible for practical projects, giving it considerable scientific and economic value.
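The matching criterion above, that corresponding points in the two camera images carry the same fringe phase, can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' code: a standard four-step phase-shifting recovery of the wrapped phase, followed by a nearest-phase search along a rectified scan line (`wrapped_phase` and `match_row` are hypothetical helper names).

```python
import numpy as np

def wrapped_phase(imgs):
    """Four-step phase shifting: imgs holds four fringe images whose
    fringe phase is shifted by pi/2 between frames; returns the wrapped
    phase in [-pi, pi)."""
    i0, i1, i2, i3 = (im.astype(float) for im in imgs)
    return np.arctan2(i3 - i1, i0 - i2)

def match_row(phase_l, phase_r):
    """For one rectified scan line, pair each left pixel with the right
    pixel of nearest phase; returns the resulting disparities."""
    idx = np.argmin(np.abs(phase_l[:, None] - phase_r[None, :]), axis=1)
    return np.arange(len(phase_l)) - idx
```

A real system would additionally unwrap the phase (e.g., with multi-frequency fringes) before matching, since the wrapped phase repeats every fringe period.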
Flight Testing an Integrated Synthetic Vision System
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.
Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi
2016-10-12
Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal's retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.
NASA Astrophysics Data System (ADS)
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregating computer vision and radio-frequency identification to determine the current storage area. It describes the design of hardware for a plant-wide industrial product positioning system based on a radio-frequency grid, the design of corresponding hardware based on computer vision methods, and the aggregation method that combines computer vision and radio-frequency identification to determine the current storage area. Experimental studies in laboratory and production conditions have been conducted and are described in the article.
Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.
2005-01-01
Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.
NASA Technical Reports Server (NTRS)
Taylor, J. H.
1973-01-01
Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2007-01-01
The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.
Development of a volumetric projection technique for the digital evaluation of field of view.
Marshall, Russell; Summerskill, Stephen; Cook, Sharon
2013-01-01
Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.
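For contrast with the volumetric approach, the regulatory-style 2D ground-plane projection that the paper extends can be sketched in a few lines: cast sight lines from the driver's eye point through the corners of a window aperture and intersect them with the ground plane. This is an illustrative toy under assumptions not in the source (the name `ground_projection`, a single polygonal aperture, eye point above the aperture corners), not the authors' tool.

```python
import numpy as np

def ground_projection(eye, aperture_corners):
    """Project sight lines from the eye point through the corners of a
    window aperture onto the ground plane z = 0, yielding the 2D
    visible-area boundary points used by ground-plane regulations."""
    eye = np.asarray(eye, float)
    pts = []
    for c in np.asarray(aperture_corners, float):
        d = c - eye                # sight-line direction
        t = -eye[2] / d[2]         # parameter where the ray meets z = 0
        pts.append((eye + t * d)[:2])
    return np.array(pts)
```

A volumetric treatment would instead keep the full 3D cone between the eye and the aperture, which is what motivates the paper's tool.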
NASA Astrophysics Data System (ADS)
Stetson, Suzanne; Weber, Hadley; Crosby, Frank J.; Tinsley, Kenneth; Kloess, Edmund; Nevis, Andrew J.; Holloway, John H., Jr.; Witherspoon, Ned H.
2004-09-01
The Airborne Littoral Reconnaissance Technologies (ALRT) project has developed and tested a nighttime operational minefield detection capability using commercial off-the-shelf high-power Laser Diode Arrays (LDAs). The Coastal System Station's ALRT project, under funding from the Office of Naval Research (ONR), has been designing, developing, integrating, and testing commercial arrays using a Cessna airborne platform over the last several years. This has led to the development of the Airborne Laser Diode Array Illuminator wide field-of-view (ALDAI-W) imaging test bed system. The ALRT project tested ALDAI-W at the Army's Night Vision Lab's Airborne Mine Detection Arid Test. By participating in Night Vision's test, ALRT was able to collect initial prototype nighttime operational data using ALDAI-W, showing impressive results and pioneering the way for the final test bed demonstration conducted in September 2003. This paper describes the ALDAI-W Arid Test and results, along with processing steps used to generate imagery.
Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)
NASA Astrophysics Data System (ADS)
Ashcraft, Todd W.; Atac, Robert
2012-06-01
Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.
An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.
Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano
2018-01-31
The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling, and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point-cloud handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera.
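The backward projection (BP) task the abstract mentions has simple closed-form geometry: intersect the camera's viewing ray with the sphere, then reflect it about the surface normal. The sketch below illustrates that geometry under simplifying assumptions (pinhole camera at the origin, `backward_project` is a hypothetical helper name); it makes no attempt to reproduce the paper's quartic forward-projection solution.

```python
import numpy as np

def backward_project(d, c, r):
    """Backward projection for a spherical-mirror catadioptric rig with
    a pinhole camera at the origin: given a viewing ray direction d,
    mirror centre c, and mirror radius r, return the reflection point on
    the mirror and the direction of the incident scene ray."""
    d = d / np.linalg.norm(d)
    # Ray-sphere intersection: solve |t*d - c|^2 = r^2 for t.
    b = d @ c
    disc = b * b - (c @ c - r * r)   # assumed >= 0 (ray hits the mirror)
    t = b - np.sqrt(disc)            # nearest intersection
    m = t * d                        # reflection point on the mirror
    n = (m - c) / r                  # outward unit surface normal
    refl = d - 2 * (d @ n) * n       # mirror the viewing ray about n
    return m, refl
```

A ray aimed straight at the mirror centre reflects back on itself, which gives a quick sanity check of the geometry.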
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Saiyed, Naseem H.; Swith, Marion Shayne
2005-01-01
When United States President George W. Bush announced the Vision for Space Exploration in January 2004, twelve propulsion and launch system projects were being pursued in the Next Generation Launch Technology (NGLT) Program. These projects underwent a review for near-term relevance to the Vision. Subsequently, five projects were chosen as advanced development projects by NASA's Exploration Systems Mission Directorate (ESMD). These five projects were Auxiliary Propulsion, Integrated Powerhead Demonstrator, Propulsion Technology and Integration, Vehicle Subsystems, and Constellation University Institutes. Recently, an NGLT effort in Vehicle Structures was identified as a gap technology that was executed via the Advanced Development Projects Office within ESMD. For all of these advanced development projects, there is an emphasis on producing specific, near-term technical deliverables related to space transportation that constitute a subset of the promised NGLT capabilities. The purpose of this paper is to provide a brief description of the relevancy review process and a status of the aforementioned projects. For each project, the background, objectives, significant technical accomplishments, and future plans are discussed. In contrast to many of the current ESMD activities, these areas are providing hardware and testing to further develop relevant technologies in support of the Vision for Space Exploration.
Notes from a clinical information system program manager. A solid vision makes all the difference.
Staggers, N
1997-01-01
Today's CIS manager will create a vision that connects computerization in ambulatory, home and community-based care with increased responsibility for patients to assume self-care. Patients will be faced with a glut of information and they will need nursing help in determining the validity of information. The new vision in this environment will focus on integration, interoperability, and a new definition for patient-centered information. Creating a well-articulated vision is the first skill in the repertoire of a CIS manager's tool set. A vision provides the firm structure upon which the entire project can be built, and provides for links to life-cycle planning. This first step in project planning begins to bring order to the chaos of dynamic demands in clinical computing.
Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects
NASA Technical Reports Server (NTRS)
Montes, Leticia; Bowers, David; Lumia, Ron
1998-01-01
This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, un-modeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision system and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment which provides remote simulation and control of equipment.
Robotics research projects report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsia, T.C.
The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)
NASA Technical Reports Server (NTRS)
1995-01-01
NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.
Machine vision for real time orbital operations
NASA Technical Reports Server (NTRS)
Vinz, Frank L.
1988-01-01
Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputing visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
A Detailed Evaluation of a Laser Triangulation Ranging System for Mobile Robots
1983-08-01
Only fragments of this report survive extraction; recoverable section headings include "Vision System Accuracy Factors", "Detector 'Cone of Vision' Problem", and "Laser Triangulation Justification". Recoverable text: Since 1968, when the effort began under a NASA grant, the project has undergone many changes both in the design goals and in ... The accuracy of the data obtained by a triangulation system depends on essentially three independent factors.
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
Prototyping machine vision software on the World Wide Web
NASA Astrophysics Data System (ADS)
Karantalis, George; Batchelor, Bruce G.
1998-10-01
Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised, and these run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software operating over the Internet or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision systems designers.
Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.
2008-01-01
NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS - true systems, rather than just terrain on a flight display - and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor to accidents and enable clear-day operational benefits regardless of visibility conditions.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which need a large number of images under varied projected patterns to acquire a dense range map, or conventional passive vision systems, which work well only in environments with sufficient feature information, a cooperative bidirectional sensor fusion method enables this visual sensor system to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: in the first, the passive stereo vision helps the active vision; in the second, the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part applies an information-fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environment. The performance of the sensor system is discussed in detail.
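The second fusion stage above matches image regions pixel by pixel with dynamic programming. The paper's exact cost function is not given here, so the sketch below is a generic scanline DP matcher of the same family (the name `dp_scanline` and the occlusion penalty `occ` are illustrative assumptions): it aligns two 1-D intensity profiles, allowing pixels to be skipped at a fixed occlusion cost.

```python
import numpy as np

def dp_scanline(left, right, occ=1.0):
    """Dynamic-programming matching of one rectified scan line: aligns
    the 1-D intensity profiles `left` and `right`, charging `occ` for
    each occluded (unmatched) pixel. Returns one disparity per left
    pixel, with -1 marking occlusion."""
    n, m = len(left), len(right)
    cost = np.zeros((n + 1, m + 1))
    cost[:, 0] = np.arange(n + 1) * occ      # all-left-occluded border
    cost[0, :] = np.arange(m + 1) * occ      # all-right-occluded border
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1, j - 1] + abs(left[i - 1] - right[j - 1])
            cost[i, j] = min(match, cost[i - 1, j] + occ, cost[i, j - 1] + occ)
    # Backtrack the optimal alignment to recover disparities.
    disp = [-1] * n
    i, j = n, m
    while i > 0 and j > 0:
        if cost[i, j] == cost[i - 1, j - 1] + abs(left[i - 1] - right[j - 1]):
            disp[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif cost[i, j] == cost[i - 1, j] + occ:
            i -= 1
        else:
            j -= 1
    return disp
```

In the paper's setting the DP costs would additionally be constrained by the already-matched laser patterns bracketing each region, which prunes the search and stabilizes the result.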
Qualifications of drivers - vision and diabetes
DOT National Transportation Integrated Search
2011-01-01
San Francisco UPA projects focus on reducing traffic congestion related to parking in downtown San Francisco. Intelligent transportation systems (ITS) technologies underlie many of the San Francisco UPA projects, including parking and roadway sensors...
Night vision goggle stimulation using LCoS and DLP projection technology, which is better?
NASA Astrophysics Data System (ADS)
Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter
2014-06-01
High-fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete, and in recent years training simulators perform NVG stimulation with laser, LCoS, and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.
Implementing the President's Vision: JPL and NASA's Exploration Systems Mission Directorate
NASA Technical Reports Server (NTRS)
Sander, Michael J.
2006-01-01
As part of the NASA team, the Jet Propulsion Laboratory is involved in the Exploration Systems Mission Directorate (ESMD) work to implement the President's Vision for Space Exploration. In this slide presentation the roles that are assigned to the various NASA centers to implement the vision are reviewed. The plan for JPL is to use the Constellation program to advance the combination of science and Constellation program objectives. JPL's current participation includes systems engineering support; Command, Control, Computing and Information (C3I) architecture; Crew Exploration Vehicle (CEV) Thermal Protection System (TPS) project support and CEV landing assist support; ground support systems support at JSC and KSC; the Exploration Communication and Navigation System (ECANS); and flight prototypes for cabin atmosphere instruments.
Technical Challenges in the Development of a NASA Synthetic Vision System Concept
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III
2002-01-01
Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database driven by precise aircraft positioning information updated via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low visibility conditions as a causal factor to civil aircraft accidents, as well as replicating the operational benefits of clear day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused around a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology which can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
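The abstract's genetic-algorithm step searches for vision parameters that best explain observed laser-line positions. The following is a minimal sketch of that idea under stated assumptions: a single scalar parameter, a sum-of-squares fitting error, and simple truncation selection with averaging crossover and Gaussian mutation; none of these choices come from the paper, and `fit_parameter` is a hypothetical name.

```python
import random

def fit_parameter(observed, model, bounds, pop=30, gens=60, seed=0):
    """Toy genetic search for one vision parameter (e.g. a camera-to-laser
    geometry term) by minimizing squared error against observed
    laser-line positions. observed: list of (x, y) samples;
    model(x, p): predicted y for parameter p; bounds: (lo, hi)."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]

    def err(p):
        return sum((model(x, p) - y) ** 2 for x, y in observed)

    for _ in range(gens):
        population.sort(key=err)           # rank by fitting error
        parents = population[: pop // 2]   # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            # averaging crossover plus Gaussian mutation, clipped to bounds
            child = 0.5 * (a + b) + rng.gauss(0, 0.05 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        population = parents + children
    return min(population, key=err)
```

With synthetic data generated from a known linear model, the search recovers the generating parameter to within the mutation noise.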
An Rx for 20/20 Vision: Vision Planning and Education.
ERIC Educational Resources Information Center
Chrisman, Gerald J.; Holliday, Clifford R.
1996-01-01
Discusses the Dallas Independent School District's decision to adopt an integrated technology infrastructure and the importance of vision planning for long term goals. Outlines the vision planning process: first draft; environmental projection; restatement of vision in terms of market projections, anticipated customer needs, suspected competitor…
Project Magnify: Increasing Reading Skills in Students with Low Vision
ERIC Educational Resources Information Center
Farmer, Jeanie; Morse, Stephen E.
2007-01-01
Modeled after Project PAVE (Corn et al., 2003) in Tennessee, Project Magnify is designed to test the idea that students with low vision who use individually prescribed magnification devices for reading will perform as well as or better than students with low vision who use large-print reading materials. Sixteen students with low vision were…
Two-Phase Flow Technology Developed and Demonstrated for the Vision for Exploration
NASA Technical Reports Server (NTRS)
Sankovic, John M.; McQuillen, John B.; Lekan, Jack F.
2005-01-01
NASA's vision for exploration will once again expand the bounds of human presence in the universe with planned missions to the Moon and Mars. To attain the numerous goals of this vision, NASA will need to develop technologies in several areas, including advanced power-generation and thermal-control systems for spacecraft and life support. The development of these systems will have to be demonstrated prior to implementation to ensure safe and reliable operation in reduced-gravity environments. The Two-Phase Flow Facility (TΦFFy) Project will provide the path to these enabling technologies for critical multiphase fluid products. The safety and reliability of future systems will be enhanced by addressing focused microgravity fluid physics issues associated with flow boiling, condensation, phase separation, and system stability, all of which are essential to exploration technology. The project--a multiyear effort initiated in 2004--will include concept development, normal-gravity testing (laboratories), reduced gravity aircraft flight campaigns (NASA's KC-135 and C-9 aircraft), space-flight experimentation (International Space Station), and model development. This project will be implemented by a team from the NASA Glenn Research Center, QSS Group, Inc., ZIN Technologies, Inc., and the Extramural Strategic Research Team composed of experts from academia.
Looking above the prairie: localized and upward acute vision in a native grassland bird.
Tyrrell, Luke P; Moore, Bret A; Loftis, Christopher; Fernández-Juricic, Esteban
2013-12-02
Visual systems of open habitat vertebrates are predicted to have a band of acute vision across the retina (visual streak) and wide visual coverage to gather information along the horizon. We tested whether the eastern meadowlark (Sturnella magna) had this visual configuration given that it inhabits open grasslands. Contrary to our expectations, the meadowlark retina has a localized spot of acute vision (fovea) and relatively narrow visual coverage. The fovea projects above rather than towards the horizon with the head at rest, and individuals modify their body posture in tall grass to maintain a similar foveal projection. Meadowlarks have relatively large binocular fields and can see their bill tips, which may help with their probe-foraging technique. Overall, meadowlark vision does not fit the profile of vertebrates living in open habitats. The binocular field may control foraging while the fovea may be used for detecting and tracking aerial stimuli (predators, conspecifics).
NASA Astrophysics Data System (ADS)
Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki
2006-01-01
In addition to the great advancement of high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as another aspect than the image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and that obstructs the applications of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitations of conventional RGB 3-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-08-01
This article reports that there are literally hundreds of machine vision systems from which to choose. They range in cost from $10,000 to $1,000,000. Most have been designed for specific applications; the same systems used for a different application may fail dismally. How can you avoid wasting money on inferior, useless, or nonexpandable systems? A good reference is the Automated Vision Association in Ann Arbor, Mich., a trade group comprised of North American machine vision manufacturers. Reputable suppliers caution users to do their homework before making an investment. Important considerations include comprehensive details on the objects to be viewed, that is, quantity, shape, dimension, size, and configuration details; lighting characteristics and variations; and component orientation details. Then, what do you expect the system to do: inspect, locate components, aid in robotic vision? Other criteria include system speed and related accuracy and reliability. What are the projected benefits and system paybacks? Examine primarily paybacks associated with scrap and rework reduction as well as reduced warranty costs.
Institute for Aviation Research and Development Research Project
1989-01-01
Symbolics Artificial Intelligence * Vision Systems * Finite Element Modeling (NASTRAN) * Aerodynamic Paneling (VSAERO) Projects: * Software... "Wall Functions for k and epsilon for Turbulent Flow Through Rough and Smooth Pipes," Eleventh International Symposium on Turbulence, October 17-19, 1988
Liquid lens: advances in adaptive optics
NASA Astrophysics Data System (ADS)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
Audio from Orbit: The Future of Libraries for Individuals Who Are Blind or Vision Impaired
ERIC Educational Resources Information Center
Steer, Michael; Cheetham, Leonie
2005-01-01
Free library service is a component of the foundations of democracy, citizenship, economic and social development, scholarship and education, in progressive societies. The evolution of libraries for people who are blind or vision impaired is briefly discussed and an innovative project, a talking book and daily newspaper delivery system that…
Portable Common Execution Environment (PCEE) project review: Peer review
NASA Technical Reports Server (NTRS)
Locke, C. Douglass
1991-01-01
The purpose of the review was to conduct an independent, in-depth analysis of the PCEE project and to provide the results of said review. The review team was tasked with evaluating the potential contribution of the PCEE project to the improvement of the life cycle support of mission and safety critical (MASC) computing components for large, complex, non-stop, distributed systems similar to those planned for such NASA programs as the space station, lunar outpost, and manned missions to Mars. Some conclusions of the review team are as follows: The PCEE project was given high marks for its breadth of vision on the overall problem with MASC software. Correlated with that sweeping vision, the Review Team is very skeptical that any research project can successfully attack such a broad range of problems. Several recommendations are made, such as to identify the components of the broad solution envisioned, prioritizing them with respect to their impact and the likely ability of the PCEE or others to attack them successfully, and to rewrite its Concept Document differentiating the problem description, objectives, approach, and results so that the project vision becomes accessible to others.
Vision technology/algorithms for space robotics applications
NASA Technical Reports Server (NTRS)
Krishen, Kumar; Defigueiredo, Rui J. P.
1987-01-01
The thrust of automation and robotics for space applications has been proposed for increased productivity, improved reliability, increased flexibility, and higher safety, as well as for automating time-consuming tasks, increasing the productivity and performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical with multimode capability to include position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
CAD-model-based vision for space applications
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.
1988-01-01
A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system that is being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint from the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
NASA Technical Reports Server (NTRS)
Rhodes, Bradley; Meck, Janice
2005-01-01
NASA's National Vision for Space Exploration includes human travel beyond low earth orbit and the ultimate safe return of the crews. Crucial to fulfilling the vision is the successful and timely development of countermeasures for the adverse physiological effects on human systems caused by long term exposure to the microgravity environment. Limited access to in-flight resources for the foreseeable future increases NASA's reliance on ground-based analogs to simulate these effects of microgravity. The primary analog for human based research will be head-down bed rest. By this approach NASA will be able to evaluate countermeasures in large sample sizes, perform preliminary evaluations of proposed in-flight protocols and assess the utility of individual or combined strategies before flight resources are requested. In response to this critical need, NASA has created the Bed Rest Project at the Johnson Space Center. The Project establishes the infrastructure and processes to provide a long term capability for standardized domestic bed rest studies and countermeasure development. The Bed Rest Project design takes a comprehensive, interdisciplinary, integrated approach that reduces the resource overhead of one investigator for one campaign. In addition to integrating studies operationally relevant for exploration, the Project addresses other new Vision objectives, namely: 1) interagency cooperation with the NIH allows for Clinical Research Center (CRC) facility sharing to the benefit of both agencies, 2) collaboration with our International Partners expands countermeasure development opportunities for foreign and domestic investigators as well as promotes consistency in approach and results, 3) to the greatest degree possible, the Project also advances research by clinicians and academia alike to encourage return to earth benefits.
This paper will describe the Project's top-level goals, organization, and relationship to other Exploration Vision Projects; explain the implementation strategy; address Project deliverables and schedules; and provide a status of bed rest campaigns presently underway.
Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III
2006-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor to civil aircraft accidents and replicate the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in SA without concomitant increases in workload and display clutter could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.
Martínez-Bueso, Pau; Moyà-Alcover, Biel
2014-01-01
Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired samples t-test confirmed that, for users with disabilities, the mirror feedback facilitated interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09, P < 0.001; Tc = 4.48, P < 0.005). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. These results suggest that developers and researchers adopt this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310
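The paired-samples t-test used in this study compares each participant's times under the two feedback conditions. A minimal sketch of the statistic, using only the Python standard library (the function name and the example data are illustrative, not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(no_mirror, mirror):
    """Paired-samples t statistic for times measured under two
    within-subject conditions. A positive t means the no-mirror
    times were larger, i.e. mirror feedback sped up interaction.
    Returns (t, degrees of freedom)."""
    diffs = [a - b for a, b in zip(no_mirror, mirror)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1
```

For example, with made-up times for four participants the statistic evaluates to about 4.70 on 3 degrees of freedom; a p-value would then be read from the t distribution.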
System of error detection in the manufacture of garments using artificial vision
NASA Astrophysics Data System (ADS)
Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.
2017-12-01
A computer vision system is implemented to detect errors in the cutting stage of the garment manufacturing process in the textile industry. It provides a solution for errors within the process that cannot be easily detected by any employee, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control is required for manufactured products, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage of the garment manufacturing process, increasing the productivity of textile processes by reducing costs.
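One simple way such an inspection stage can work is to compare each cut piece against its pattern template and flag pixels that deviate beyond a tolerance. The sketch below is a toy stand-in under that assumption; a real system would first align the images and filter noise, and nothing here is taken from the paper's implementation.

```python
def defect_mask(template, cut, tol=30):
    """Flag pixels where the imaged cut piece deviates from the
    pattern template by more than an intensity tolerance.

    template, cut: equal-sized 2D grids of grayscale values (0-255).
    Returns a boolean grid; True marks a suspected cutting error.
    """
    return [[abs(t - c) > tol for t, c in zip(trow, crow)]
            for trow, crow in zip(template, cut)]
```

A downstream step would typically count or cluster the flagged pixels and reject pieces whose defect area exceeds a threshold.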
New approach for teaching health promotion in the community: integration of three nursing courses.
Moshe-Eilon, Yael; Shemy, Galia
2003-07-01
The complexity of the health care system and its interdisciplinary nature require that each component of the system redefine its professional framework, relative advantage, and unique contribution as an independent discipline. In choosing the most efficient and cost-effective workforce, each profession in the health care system must clarify its importance and contribution; otherwise functions will overlap and financial resources will be wasted. As rapid and wide-ranging changes occur in the health care system, the nursing profession must display a new and comprehensive vision that projects its values, beliefs, and relationships with and commitment to both patients and coworkers. The plans to fulfill this vision must be described clearly. This article presents part of a new professional paradigm developed by the nursing department of the University of Haifa, Israel. Three main topics are addressed: the building blocks of the new vision (i.e., community and health promotion, managerial skills, academic research); integration of the building blocks into the 4-year baccalaureate degree program (i.e., how to practice health promotion with students in the community setting; managerial nursing skills at the baccalaureate level, including which to choose, to what depth, and how to teach them; and academic nursing research, including the best way to teach basic research skills and implement them via a community project); and two senior student projects demonstrating the practical linking of the building blocks.
A Model for Integrating Low Vision Services into Educational Programs.
ERIC Educational Resources Information Center
Jose, Randall T.; And Others
1988-01-01
A project integrating low-vision services into children's educational programs comprised four components: teacher training, functional vision evaluations for each child, a clinical examination by an optometrist, and follow-up visits with the optometrist to evaluate the prescribed low-vision aids. Educational implications of the project and project…
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.
2006-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results showed the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
Computing Visible-Surface Representations,
1985-03-01
Terzopoulos; contract N00014-75-C-0643. Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects Agency... dynamically maintaining visible-surface representations. Whether the intention is to model human vision or to design competent artificial vision systems
Insect-Based Vision for Autonomous Vehicles: A Feasibility Study
NASA Technical Reports Server (NTRS)
Srinivasan, Mandyam V.
1999-01-01
The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) to explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; and (2) to study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.
2004-05-13
KENNEDY SPACE CENTER, FLA. -- Adm. Craig E. Steidle (center), NASA’s associate administrator, Office of Exploration Systems, tours the Orbiter Processing Facility on a visit to KSC. At right (hands up) is Conrad Nagel, chief of the Shuttle Project Office. They are standing under the orbiter Discovery. The Office of Exploration Systems was established to set priorities and direct the identification, development and validation of exploration systems and related technologies to support the future space vision for America. Steidle’s visit included a tour of KSC to review the facilities and capabilities to be used to support the vision.
Making sausage--effective management of enterprise-wide clinical IT projects.
Smaltz, Detlev H; Callander, Rhonda; Turner, Melanie; Kennamer, Gretchen; Wurtz, Heidi; Bowen, Alan; Waldrum, Mike R
2005-01-01
Unlike most other industries in which company employees are, well, company employees, U.S. hospitals are typically run by both employees (nurses, technicians, and administrative staff) and independent entrepreneurs (physicians and nurse practitioners). Therefore, major enterprise-wide clinical IT projects can never simply be implemented by mandate. Project management processes in these environments must rely on methods that influence adoption rather than presume adoption will occur. "Build it and they will come" does not work in a hospital setting. This paper outlines a large academic medical center's experiences in managing an enterprise-wide project to replace its core clinical systems functionality. Best practices include developing a cogent optimal future-state vision, communications planning and execution, vendor validation against the optimal future-state vision, and benefits realization assessment.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense is involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created such that the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.
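The testbed's simulated camera data supplies XYZ position plus pitch, yaw, and roll. Turning those angles into an orientation matrix can be sketched as below; the Z-Y-X (yaw, pitch, roll) composition shown is one common aerospace convention assumed here, since the abstract does not state which convention the testbed uses.

```python
from math import cos, sin, radians

def pose_matrix(yaw, pitch, roll):
    """3x3 rotation matrix from yaw (about Z), pitch (about Y), and
    roll (about X), all in degrees, composed as R = Rz @ Ry @ Rx."""
    cy, sy = cos(radians(yaw)), sin(radians(yaw))
    cp, sp = cos(radians(pitch)), sin(radians(pitch))
    cr, sr = cos(radians(roll)), sin(radians(roll))
    Rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]
    Ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    Rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(Rz, matmul(Ry, Rx))
```

A 90-degree yaw, for instance, maps the body X axis onto world Y, which is a quick sanity check for the convention chosen.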
Final Report on Video Log Data Mining Project
DOT National Transportation Integrated Search
2012-06-01
This report describes the development of an automated computer vision system that identifies and inventories road signs from imagery acquired by the Kansas Department of Transportation's road profiling system, which captures images every 26.4 feet...
Vision based flight procedure stereo display system
NASA Astrophysics Data System (ADS)
Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng
2008-03-01
A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database, so the approach area can be displayed dynamically following the designed flight procedure. The approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew get a vivid 3D view of the flight destination approach area. Using this system in pilots' preflight preparation gives the aircrew more vivid information about the destination approach area. This system can improve the aviator's self-confidence before the flight mission and, accordingly, flight safety. The system is also useful for validating visual flight procedure designs and assists in flight procedure design.
Machine Vision Applied to Navigation of Confined Spaces
NASA Technical Reports Server (NTRS)
Briscoe, Jeri M.; Broderick, David J.; Howard, Ricky; Corder, Eric L.
2004-01-01
The reliability of space related assets has been emphasized after the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires a complete disassembly to perform a thorough inspection which can be difficult as well as costly. Furthermore, it is imperative that the hardware under inspection not be altered in any other manner than that which is intended. In these cases the use of machine vision can allow for inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide, not only manually controlled instrumentation, but autonomous robotic platforms as well. This paper serves to detail a method using machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The design of the sensor focuses on the use of conventional components in an unconventional manner with the goal of providing a solution for systems that do not require or cannot accommodate more complex vision systems.
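Under a pinhole-camera assumption, the single-camera ranging geometry described above reduces to triangulation between the camera and the grid projector: depth follows from the pixel disparity of a projected grid point. The sketch below is illustrative only; the function name and parameters are assumptions, not details from the paper.

```python
# Hypothetical sketch of single-camera structured-light ranging: a projected
# reference grid is viewed by one camera, and depth follows from triangulation
#   z = f * b / d
# where b is the camera-projector baseline (m), f the focal length (pixels),
# and d the disparity (pixels) between where a grid point appears and where
# it would appear at infinite range.

def grid_point_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (m) of a projected grid point from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 800 px focal length, a 0.1 m baseline, and a 40 px disparity, the point lies 2 m away; the closer the surface, the larger the disparity.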
2004-05-13
KENNEDY SPACE CENTER, FLA. -- Adm. Craig E. Steidle (center), NASA’s associate administrator, Office of Exploration Systems, tours the Orbiter Processing Facility on a visit to KSC. At left is Conrad Nagel, chief of the Shuttle Project Office. They are standing under the left wing and wheel well of the orbiter Discovery. The Office of Exploration Systems was established to set priorities and direct the identification, development and validation of exploration systems and related technologies to support the future space vision for America. Steidle’s visit included a tour of KSC to review the facilities and capabilities to be used to support the vision.
2004-05-13
KENNEDY SPACE CENTER, FLA. -- Adm. Craig E. Steidle (center), NASA’s associate administrator, Office of Exploration Systems, listens to Conrad Nagel, chief of the Shuttle Project Office (right), during a tour of the Orbiter Processing Facility on a visit to KSC. They are standing under the orbiter Discovery. The Office of Exploration Systems was established to set priorities and direct the identification, development and validation of exploration systems and related technologies to support the future space vision for America. Steidle’s visit included a tour of KSC to review the facilities and capabilities to be used to support the vision.
Humanoids for lunar and planetary surface operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing
2005-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the coming decades and the development spirals of Project Constellation. These milestones pose a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.
Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin
2017-03-27
Augmented reality (AR), which uses computer-generated projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges for the optical system in AR: registration, vision correction, and readability under strong ambient light. Here, we solve all three challenges simultaneously for the first time using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated into an optical see-through AR system. One of the LC lenses electrically adjusts the position of the projected virtual image, which is called registration. The other LC lens, with a larger aperture and polarization-independent characteristics, is in charge of vision correction, such as for myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is addressed by the electrically switchable transmittance of the LC attenuator, which originates from light scattering and light absorption. The concept demonstrated in this paper can be extended to other electro-optical devices, as long as the devices are capable of phase modulation and amplitude modulation.
A vision fusion treatment system based on ATtiny26L
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang
2006-11-01
Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of the eyeballs following a moving visual survey pole is presented. In this system, the visual survey pole starts about 35 centimeters from the patient's face and moves toward the midpoint between the two eyes. The patient's eyes follow the movement of the pole; when they can no longer follow, one or both eyes turn away from the pole, and this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal drives the visual survey pole at continuously variable speed. The movement of the pole follows the modulation law of eyeballs tracking a visual survey pole.
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir
2014-06-01
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next-generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities, including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control, and constrained motion, to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator ever approaching the device.
An augmented-reality edge enhancement application for Google Glass.
Hwang, Alex D; Peli, Eli
2014-08-01
Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass's camera lens distortions were corrected using image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass's see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance. The authors assume that this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
Vision and Voyages: Lessons Learned from the Planetary Decadal Survey
NASA Astrophysics Data System (ADS)
Squyres, S. W.
2015-12-01
The most recent planetary decadal survey, entitled Vision and Voyages for Planetary Science in the Decade 2013-2022, provided a detailed set of priorities for solar system exploration. Those priorities drew on broad input from the U.S. and international planetary science community. Using white papers, town hall meetings, and open meetings of the decadal committees, community views were solicited and a consensus began to emerge. The final report summarized that consensus. Like many past decadal reports, the centerpiece of Vision and Voyages was a set of priorities for future spaceflight projects. Two things distinguished this report from some previous decadal surveys. First, conservative and independent cost estimates were obtained for all of the projects that were considered. These independent cost estimates, rather than estimates generated by project advocates, were used to judge each project's expected science return per dollar. Second, rather than simply accepting NASA's ten-year projection of expected funding for planetary exploration, decision rules were provided to guide program adjustments if actual funding did not follow projections. To date, NASA has closely followed decadal recommendations. In particular, the two highest priority "flagship" missions, a Mars rover to collect samples for return to Earth and a mission to investigate a possible ocean on Europa, are both underway. The talk will describe the planetary decadal process in detail and provide a more comprehensive assessment of NASA's response to it.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Hughes, Monica F.; Arthur, Jarvis J., III; Kramer, Lynda J.; Glaab, Louis J.; Bailey, Randy E.; Parrish, Russell V.; Uenking, Michael D.
2003-01-01
Because restricted visibility has been implicated in the majority of commercial and general aviation accidents, solutions will need to focus on how to enhance safety during instrument meteorological conditions (IMC). The NASA Synthetic Vision Systems (SVS) project is developing technologies to help achieve these goals through the synthetic presentation of how the outside world would look to the pilot if vision were not reduced. The potential safety outcome would be a significant reduction in several accident categories, such as controlled-flight-into-terrain (CFIT), that have restricted visibility as a causal factor. The paper describes two experiments that demonstrated the efficacy of synthetic vision technology to prevent CFIT accidents for both general aviation and commercial aircraft.
The Miami-Dade Juvenile Assessment Center National Demonstration Project
ERIC Educational Resources Information Center
Walters, Wansley; Dembo, Richard; Beaulaurier, Richard; Cocozza, Joseph; De La Rosa, Mario; Poythress, Norman; Skowyra, Kathy; Veysey, Bonita M.
2005-01-01
The Miami-Dade Juvenile Assessment Center National Demonstration Project (NDP) is serving as a national model for the transformation of front-end services in the juvenile justice system in a unique sociocultural setting. We discuss the background and vision of the NDP, its implementation and accomplishments in six major program areas: (1)…
Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision
ERIC Educational Resources Information Center
Prull, Matthew W.; Banks, William P.
2005-01-01
We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…
Chapter 7: Lessons, Conclusions, and Implications of the Saber-Tooth Project.
ERIC Educational Resources Information Center
Ward, Phillip; Doutis, Panayiotis; Evans, Sharon A.
1999-01-01
Summarizes findings from the Saber-Tooth Project related to systemic change and student learning, concluding that vision is everything; workplace conditions must be addressed at multiple levels; strong relationships exist among planning, teaching, and assessment; and improvement in reform may occur due to the cessation of business as usual. This…
2006-07-27
The goal of this project was to develop analytical and computational tools to make vision a viable sensor for... We have proposed the framework of stereoscopic segmentation, in which multiple images of the same objects are jointly processed to extract geometry.
A position and attitude vision measurement system for wind tunnel slender model
NASA Astrophysics Data System (ADS)
Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi
2014-11-01
A vision-based position and attitude measurement system for slender models in wind tunnel drop tests is designed and developed. The system uses two high-speed cameras: one placed to the side of the model and another positioned to look up at the model. Simple symbols are placed on the model. The main idea of the system is image matching between projection images of the 3D digital model and the images captured by the cameras. First, the pitch angle, roll angle, and centroid position of the model are estimated by recognizing the symbols in the images captured by the side camera. Then, based on the estimated attitude and a series of candidate yaw angles, a series of projection images of the 3D digital model is generated. Finally, these projection images are matched against the image captured by the look-up camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments show that the maximum attitude measurement error is less than 0.05°, which meets the demands of wind tunnel testing.
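The yaw search described above can be sketched as an exhaustive match over candidate angles. This is an illustrative reconstruction, not the authors' code: `render` stands in for whatever generates a projection image of the 3D digital model at a given yaw, and normalized cross-correlation is one plausible matching score.

```python
import math

def normalized_correlation(a, b):
    """Normalized cross-correlation of two equal-length intensity sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def estimate_yaw(camera_image, render, candidate_yaws):
    """Return the candidate yaw whose rendered projection best matches the camera image."""
    scores = [(normalized_correlation(render(y), camera_image), y) for y in candidate_yaws]
    return max(scores)[1]
```

In practice the renderings and camera image would be 2D arrays and the match score computed over the whole image; the 1D form above keeps the search logic visible.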
Outline Guide to Educational Reform Initiatives. ERS Research Digest.
ERIC Educational Resources Information Center
Educational Research Service, Arlington, VA.
Many educational reform initiatives are being tried in an effort to restructure the American school system. This guide compares major educational reform efforts by goal, vision, teaching and learning, and system components. The first section of the guide covers major systemic educational reform initiatives, including Accelerated Schools Project,…
The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis
NASA Astrophysics Data System (ADS)
Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.
2013-07-01
This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the intersection of computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousand coins. Furthermore, the system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of the obverse and reverse of the coin of interest. ILAC explores different computer vision techniques and their combinations for image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploits certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given, as well as an outlook on the next steps of the project.
Humanoids in Support of Lunar and Planetary Surface Operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier
2006-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the coming decades and the development spirals of Project Constellation. These milestones pose a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using a small-scale Fujitsu HOAP-2 humanoid is outlined.
Special Technology Area Review on Micro-Opto-Electro-Mechanical-Systems (MOEMS)
1997-12-01
Laser projection positioning of spatial contour curves via a galvanometric scanner
NASA Astrophysics Data System (ADS)
Tu, Junchao; Zhang, Liyan
2018-04-01
The technology of laser projection positioning is widely applied in advanced manufacturing (e.g., composite plying and part location and installation). To exploit it better, a laser projection positioning (LPP) system is designed and implemented. First, the LPP system is built from a laser galvanometric scanning (LGS) system and a binocular vision system, and the system model is constructed using a single-hidden-layer feed-forward neural network (SLFN). Second, the LGS system and the binocular system, which are independent of each other, are integrated through a data-driven calibration method based on the extreme learning machine (ELM) algorithm. Finally, a projection positioning method is proposed within the framework of the calibrated SLFN system model. A well-designed experiment is conducted to verify the viability and effectiveness of the proposed system. In addition, the projection positioning accuracy is evaluated, showing that the LPP system achieves good localization performance.
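ELM training of an SLFN amounts to fixing random input weights and solving the output weights in closed form by least squares. The sketch below is a generic illustration of that idea, not the paper's implementation; the choice of input/output variables (e.g., a target point mapped to galvanometer commands) and all parameters are assumptions.

```python
import math, random

def elm_train(X, T, hidden=30, seed=1):
    """Fit an SLFN by ELM: random tanh hidden layer, least-squares output weights."""
    rng = random.Random(seed)
    n_in = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(w[i] * x[i] for i in range(n_in)) + bi)
          for w, bi in zip(W, b)] for x in X]
    return W, b, _lstsq(H, T)

def elm_predict(model, x):
    W, b, beta = model
    h = [math.tanh(sum(w[i] * x[i] for i in range(len(x))) + bi)
         for w, bi in zip(W, b)]
    return sum(hj * bj for hj, bj in zip(h, beta))

def _lstsq(H, t):
    # Solve (H^T H + ridge*I) beta = H^T t by Gaussian elimination with pivoting.
    L, n = len(H[0]), len(H)
    A = [[sum(H[k][i] * H[k][j] for k in range(n)) + (1e-9 if i == j else 0.0)
          for j in range(L)] for i in range(L)]
    y = [sum(H[k][i] * t[k] for k in range(n)) for i in range(L)]
    for i in range(L):
        p = max(range(i, L), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        y[i], y[p] = y[p], y[i]
        for r in range(i + 1, L):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            y[r] -= f * y[i]
    beta = [0.0] * L
    for i in range(L - 1, -1, -1):
        beta[i] = (y[i] - sum(A[i][j] * beta[j] for j in range(i + 1, L))) / A[i][i]
    return beta
```

The appeal of ELM for calibration is speed: training is a single linear solve, so the camera-to-galvanometer mapping can be refit quickly whenever the rig is recalibrated.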
Computer vision for automatic inspection of agricultural produce
NASA Astrophysics Data System (ADS)
Molto, Enrique; Blasco, Jose; Benlloch, Jose V.
1999-01-01
Fruit and vegetables undergo various manipulations from the field to the final consumer, basically oriented toward cleaning and sorting the product into homogeneous categories. For this reason, several research projects aimed at fast, accurate produce sorting and quality control are currently under development around the world. Manual and semi-automatic commercial systems capable of reasonably performing these tasks already exist; however, in many cases their accuracy is incompatible with constantly increasing European market demands. IVIA, the Valencian Research Institute of Agriculture, located in Spain, has been involved in several European projects on machine vision for real-time inspection of agricultural produce. This paper focuses on work related to two products with different requirements: fruit and olives. For fruit, the Institute has developed a vision system capable of assessing the external quality of individual fruit for a robot that also receives information from other sensors. The system uses four different views of each fruit and has been tested on peaches, apples, and citrus. Processing time per image is under 500 ms on a conventional PC. The system provides information about primary and secondary color, blemishes and their extent, and stem presence and position, which allows subsequent automatic orientation of the fruit in the final box using a robotic manipulator. The work on olives was devoted to fast sorting of table olives. A prototype has been developed to demonstrate the feasibility of a machine vision system capable of automatically sorting 2500 kg/h of olives using low-cost conventional hardware.
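As a rough illustration of the kind of per-pixel color classification such an inspection system might use, each RGB pixel can be labeled and the blemish area fraction reported. The categories and thresholds below are invented for the example and are not IVIA's actual rules.

```python
# Toy per-pixel color classifier (hypothetical thresholds, for illustration only):
# label each pixel as primary color, secondary color, or blemish, then report
# the fraction of the fruit surface classified as blemish.

def classify_pixel(r, g, b):
    if r > 120 and g > 100 and r + g > 2.5 * b:   # bright yellowish-orange flesh
        return "primary"
    if r > 120 and g <= 100:                      # reddish secondary coloration
        return "secondary"
    return "blemish"                              # dark or off-color region

def blemish_fraction(pixels):
    """pixels: iterable of (r, g, b) tuples for the segmented fruit region."""
    labels = [classify_pixel(*p) for p in pixels]
    return labels.count("blemish") / len(labels)
```

A real system would work in a more perceptually uniform color space and train the decision boundaries from labeled samples, but the output (a defect-area fraction per fruit) is the quantity a grader thresholds on.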
75 FR 51441 - Mid-Atlantic Fishery Management Council (MAFMC); Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-20
... development of the Visioning Project. DATES: The meeting will be held Thursday, September 9, 2010, from 10 a.m...: The purpose of this meeting is to begin the development of the Council's Visioning Project. The... project goals. The initial purpose of the project is to identify stakeholders' views on the management...
A study on integrating surveys of terrestrial natural resources: The Oregon Demonstration Project
J. Jeffery Goebel; Hans T. Schreuder; Carol C. House; Paul H. Geissler; Anthony R. Olsen; William Williams
1998-01-01
An interagency project demonstrated the feasibility of integrating Federal surveys of terrestrial natural resources and offers a vision for that integration. At locations selected from forest inventory and analysis, National forest system Region 6, and national resources inventory surveys in a six-county area in Northern Oregon, experienced teams interpreted and made...
The Research Path to the Virtual Class. ZIFF Papiere 105.
ERIC Educational Resources Information Center
Rajasingham, Lalita
This paper describes a project conducted in 1991-92, based on research conducted in 1986-87 that demonstrated the need for a telecommunications system with the capacity of integrated services digital networks (ISDN) that would allow for sound, vision, and integrated computer services. Called the Tri-Centre Project, it set out to explore, from the…
ERIC Educational Resources Information Center
Sanspree, M. J.; And Others
1991-01-01
This article describes the Vision Outreach Project--a pilot project of the University of Alabama at Birmingham for training teachers of visually impaired students. The project produced video modules to provide distance education in rural and urban areas. The modules can be used to complete degree requirements or in-service training and continuing…
Rotorcraft Conceptual Design Environment
2009-10-01
systems engineering design tool sets. The DaVinci Project vision is to develop software architecture and tools specifically for acquisition system... enabling movement of that information to and from analyses. Finally, a recently developed rotorcraft system analysis tool is described.
NASA Astrophysics Data System (ADS)
Hildreth, E. C.
1985-09-01
For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color, and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description, and use, in both computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
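The intensity-change detection reviewed here can be illustrated with a one-dimensional sketch: convolve a scanline with the derivative of a Gaussian and mark positions where the smoothed gradient magnitude exceeds a threshold (the Marr-Hildreth analysis instead looks for zero crossings of the second derivative). The parameter values below are arbitrary illustration choices.

```python
import math

def gaussian_derivative_kernel(sigma: float, radius: int):
    # d/dx of an (unnormalized) Gaussian: -x/sigma^2 * exp(-x^2 / (2 sigma^2))
    return [-x / sigma**2 * math.exp(-x * x / (2 * sigma**2))
            for x in range(-radius, radius + 1)]

def detect_edges(scanline, sigma=1.0, threshold=0.2):
    """Indices where the Gaussian-smoothed gradient magnitude exceeds threshold."""
    r = max(1, int(3 * sigma))          # truncate the kernel at ~3 sigma
    k = gaussian_derivative_kernel(sigma, r)
    edges = []
    for i in range(r, len(scanline) - r):
        g = sum(k[j + r] * scanline[i - j] for j in range(-r, r + 1))
        if abs(g) > threshold:
            edges.append(i)
    return edges
```

On a unit step the response is a Gaussian-shaped bump centered on the discontinuity, so several adjacent indices fire; a full detector would keep only the local maximum (non-maximum suppression).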
An Augmented-Reality Edge Enhancement Application for Google Glass
Hwang, Alex D.; Peli, Eli
2014-01-01
Purpose Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Methods Google Glass's camera lens distortions were corrected using image warping. Since the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of 3D transformations to minimize parallax errors before the final projection to the Glass's see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. Results For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance. The authors assume this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Conclusions Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration. PMID:24978871
NASA Technical Reports Server (NTRS)
Lee, Meemong; Weidner, Richard J.
2008-01-01
The Juno spacecraft is planned to launch in August of 2012 and would arrive at Jupiter four years later. The spacecraft would spend more than one year orbiting the planet and investigating the existence of an ice-rock core; determining the amount of global water and ammonia present in the atmosphere; studying convection and deep-wind profiles in the atmosphere; investigating the origin of the Jovian magnetic field; and exploring the polar magnetosphere. Juno mission management is responsible for mission and navigation design, mission operation planning, and ground-data-system development. In order to ensure successful mission management from initial checkout to final de-orbit, it is critical to share a common vision of the entire mission operation phases with the rest of the project teams. Two major challenges are 1) how to develop a shared vision that can be appreciated by all of the project teams of diverse disciplines and expertise, and 2) how to continuously evolve a shared vision as the project lifecycle progresses from formulation phase to operation phase. The Juno mission simulation team addresses these challenges by developing agile and progressive mission models, operation simulations, and real-time visualization products. This paper presents mission simulation visualization network (MSVN) technology that has enabled a comprehensive mission simulation suite (MSVN-Juno) for the Juno project.
NASA Fixed Wing Project: Green Technologies for Future Aircraft Generation
NASA Technical Reports Server (NTRS)
DelRosario, Ruben
2014-01-01
The NASA Fundamental Aeronautics Fixed Wing (FW) Project addresses the comprehensive challenge of enabling revolutionary energy efficiency improvements in subsonic transport aircraft, combined with dramatic reductions in harmful emissions and perceived noise, to facilitate sustained growth of the air transportation system. Advances in multidisciplinary technologies and the development of unconventional aircraft systems offer the potential to achieve these improvements. The presentation will highlight the FW Project vision of the revolutionary systems and technologies needed to achieve the challenging goals of aviation. Specifically, the primary focus of the FW Project is on the N+3 generation, that is, vehicles three generations beyond the current state of the art, requiring mature technology solutions in the 2025-30 timeframe.
Using the auxiliary camera for system calibration of 3D measurement by digital speckle
NASA Astrophysics Data System (ADS)
Xue, Junpeng; Su, Xianyu; Zhang, Qican
2014-06-01
The study of 3D shape measurement by digital speckle temporal-sequence correlation has drawn a lot of attention owing to its advantages; however, the measurement mainly yields the depth z-coordinate, while the horizontal physical coordinates (x, y) are usually expressed only as image pixel coordinates. In this paper, a new approach for system calibration is proposed. With an auxiliary camera, we set up a temporary binocular vision system, which is used to calibrate the horizontal coordinates (mm) while the temporal-sequence reference speckle sets are calibrated. First, the binocular vision system is calibrated using the traditional method. Then, digital speckles are projected onto a reference plane, which is moved by equal distances in the depth direction, and temporal-sequence speckle images are acquired with the camera as reference sets. When the reference plane is at the first position and the final position, a crossed fringe pattern is projected onto the plane at each. The pixel coordinates of the control points are extracted from the images by Fourier analysis, and their physical coordinates are calculated by the binocular vision system. The physical coordinates corresponding to each pixel of the images are then calculated by an interpolation algorithm. Finally, the x and y corresponding to an arbitrary depth value z are obtained from a geometric formula. Experiments prove that our method can quickly and flexibly measure the 3D shape of an object as a point cloud.
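The final step described in the abstract, recovering physical (x, y) for an arbitrary depth z, can be sketched as a per-pixel linear interpolation between the two calibrated reference-plane positions. This is an illustrative reconstruction under assumed names and toy coordinates, not the paper's actual code:

```python
import numpy as np

def interp_physical_coords(z, z0, z1, xy0, xy1):
    """Linearly interpolate per-pixel physical (x, y) coordinates (mm)
    for an arbitrary depth z, given coordinates calibrated at the
    first (z0) and final (z1) reference-plane positions."""
    t = (z - z0) / (z1 - z0)           # interpolation parameter along depth
    return xy0 + t * (xy1 - xy0)       # straight-line model of the projection ray

# Per-pixel physical coordinates measured by the binocular system at two planes
xy_first = np.array([[10.0, 20.0]])    # (x, y) in mm at z0 = 0 mm
xy_final = np.array([[12.0, 24.0]])    # (x, y) in mm at z1 = 100 mm
print(interp_physical_coords(50.0, 0.0, 100.0, xy_first, xy_final))
```

The straight-line assumption holds when each pixel's line of sight is a ray; for larger depth ranges the paper's geometric formula would refine this.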
Plutonium immobilization can loading FY99 component test report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.
2000-06-01
This report summarizes FY99 Can Loading work completed for the Plutonium Immobilization Project and it includes details about the Helium hood, cold pour cans, Can Loading robot, vision system, magnetically coupled ray cart and lifts, system integration, Can Loading glovebox layout, and an FY99 cost table.
Suzuki, Daichi G; Murakami, Yasunori; Escriva, Hector; Wada, Hiroshi
2015-02-01
Vertebrates are equipped with so-called camera eyes, which provide them with image-forming vision. Vertebrate image-forming vision evolved independently from that of other animals and is regarded as a key innovation for enhancing predatory ability and ecological success. Evolutionary changes in the neural circuits, particularly the visual center, were central for the acquisition of image-forming vision. However, the evolutionary steps, from protochordates to jaw-less primitive vertebrates and then to jawed vertebrates, remain largely unknown. To bridge this gap, we present the detailed development of retinofugal projections in the lamprey, the neuroarchitecture in amphioxus, and the brain patterning in both animals. Both the lateral eye in larval lamprey and the frontal eye in amphioxus project to a light-detecting visual center in the caudal prosencephalic region marked by Pax6, which possibly represents the ancestral state of the chordate visual system. Our results indicate that the visual system of the larval lamprey represents an evolutionarily primitive state, forming a link from protochordates to vertebrates and providing a new perspective of brain evolution based on developmental mechanisms and neural functions. © 2014 Wiley Periodicals, Inc.
2010-04-01
project are the establishment of a telemedicine system for comprehensive diabetes management and the assessment of diabetic retinopathy that ... virtually eliminate diabetic retinopathy as a cause of severe vision loss. Nevertheless, diabetes remains the leading cause of new blindness in working ... Eye care module DESCRIPTION: The primary questions are: What are the costs associated with diabetic retinopathy evaluations performed by an
Computer interfaces for the visually impaired
NASA Technical Reports Server (NTRS)
Higgins, Gerry
1991-01-01
Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology to persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, with the aim of combining off-the-shelf products and adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile interfaces to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project. The project will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public-domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.
Hubble Space Telescope: cost reduction by re-engineering telemetry processing and archiving
NASA Astrophysics Data System (ADS)
Miebach, Manfred P.
1998-05-01
The Hubble Space Telescope (HST), the first of NASA's Great Observatories, was launched on April 24, 1990. The HST was designed for a minimum fifteen-year mission, with on-orbit servicing by the Space Shuttle System planned at approximately three-year intervals. Major changes to the HST ground system are planned to be in place for the third servicing mission in December 1999. The primary objectives of the ground system re-engineering effort, a project called 'Vision 2000 Control Center Systems (CCS)', are to reduce both development and operating costs significantly for the remaining years of HST's lifetime. Development costs will be reduced by providing a modern hardware and software architecture and utilizing commercial off-the-shelf (COTS) products wherever possible. Operating costs will be reduced by eliminating redundant legacy systems and processes and by providing an integrated ground system geared toward autonomous operation. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on current Data Warehouse technology. The purpose of this data store is to provide a common data source of telemetry data for all HST subsystems. This data store will become the engineering data archive and will include a queryable database for the user to analyze HST telemetry. Access to the engineering data in the Data Warehouse is platform-independent from an office environment using commercial standards. The latest internet technology is used to reach the HST engineering community. A Web-based user interface allows easy access to the data archives. This paper will provide a high-level overview of the CCS system and will illustrate some of the CCS telemetry capabilities. Samples of CCS user interface pages will be given. Vision 2000 is an ambitious project, but one that is well under way.
It will allow the HST program to realize reduced operations costs for the Third Servicing Mission and beyond.
NASA's First Year Progress with Fuel Cell Advanced Development in Support of the Exploration Vision
NASA Technical Reports Server (NTRS)
Hoberecht, Mark
2007-01-01
NASA Glenn Research Center (GRC), in collaboration with Johnson Space Center (JSC), the Jet Propulsion Laboratory (JPL), Kennedy Space Center (KSC), and industry partners, is leading a proton-exchange-membrane fuel cell (PEMFC) advanced development effort to support the vision for Exploration. This effort encompasses the fuel cell portion of the Energy Storage Project under the Exploration Technology Development Program, and is directed at multiple power levels for both primary and regenerative fuel cell systems. The major emphasis is the replacement of active mechanical ancillary components with passive components in order to reduce mass and parasitic power requirements, and to improve system reliability. A dual approach directed at both flow-through and non flow-through PEMFC system technologies is underway. A brief overview of the overall PEMFC project and its constituent tasks will be presented, along with in-depth technical accomplishments for the past year. Future potential technology development paths will also be discussed.
Breaking BAD: A Data Serving Vision for Big Active Data
Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.
2017-01-01
Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377
History Places: A Case Study for Relational Database and Information Retrieval System Design
ERIC Educational Resources Information Center
Hendry, David G.
2007-01-01
This article presents a project-based case study that was developed for students with diverse backgrounds and varied inclinations for engaging technical topics. The project, called History Places, requires that student teams develop a vision for a kind of digital library, propose a conceptual model, and use the model to derive a logical model and…
Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results
NASA Astrophysics Data System (ADS)
Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric
2014-06-01
High-fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete, and in recent years training simulators have performed NVG stimulation with laser, LCoS, and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approaches for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG-wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.
Research on the feature set construction method for spherical stereo vision
NASA Astrophysics Data System (ADS)
Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia
2015-01-01
Spherical stereo vision is a kind of stereo vision system built with fish-eye lenses, in which the stereo algorithms must conform to the spherical model. Epipolar geometry is the theory describing the relationship of the two imaging planes in a stereo vision system based on the perspective projection model. However, an epipolar line in an uncorrected fish-eye image is not a straight line but an arc intersecting at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored, and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Region (MSER) detection takes grayscale as the independent variable and uses local extrema of the area variation as the detection result. It has been demonstrated in the literature that MSER depends only on the gray variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper. The intersection of the rectified epipolar curve and the corresponding MSER region is taken as the feature set of the spherical stereo vision system. Experiments show that this study achieved the expected results.
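The feature-set construction step, intersecting a rectified epipolar curve with an MSER region, can be sketched with plain NumPy. The detector output is mocked as a boolean mask and the curve as sampled pixel coordinates; names and toy data are illustrative assumptions, not from the paper:

```python
import numpy as np

def feature_set(mser_mask, epipolar_pixels):
    """Keep only the epipolar-curve pixels (row, col) that fall inside
    the MSER region mask; the surviving pixels form the feature set."""
    return [(r, c) for r, c in epipolar_pixels if mser_mask[r, c]]

# Toy 5x5 image: one MSER region covering rows 1-3, cols 1-3
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
# A rectified epipolar curve sampled as pixel coordinates
curve = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
print(feature_set(mask, curve))   # pixels where the curve crosses the region
```

In a full pipeline the mask would come from an MSER detector and the curve from the paper's nonlinear rectification; only the intersection logic is shown here.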
Human Factors Engineering as a System in the Vision for Exploration
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Smith, Danielle; Holden, Kritina
2006-01-01
In order to accomplish NASA's Vision for Exploration, while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle, and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps).
For example, medical operations scenarios have been generated for lunar habitation in order to identify HSI requirements for the lunar communications architecture. Throughout these ConOps exercises, HFE personnel are testing various tools and methodologies that have been identified in the literature. A key part of the effort is the identification of optimal processes, methods, and tools for these early development phase activities, such as ConOps, requirements development, and early conceptual design. An overview of the activities completed thus far, as well as the tools and methods investigated will be presented.
A vision and strategy for the virtual physiological human in 2010 and beyond.
Hunter, Peter; Coveney, Peter V; de Bono, Bernard; Diaz, Vanessa; Fenner, John; Frangi, Alejandro F; Harris, Peter; Hose, Rod; Kohl, Peter; Lawford, Pat; McCormack, Keith; Mendes, Miriam; Omholt, Stig; Quarteroni, Alfio; Skår, John; Tegner, Jesper; Randall Thomas, S; Tollis, Ioannis; Tsamardinos, Ioannis; van Beek, Johannes H G M; Viceconti, Marco
2010-06-13
European funding under framework 7 (FP7) for the virtual physiological human (VPH) project has been in place now for nearly 2 years. The VPH network of excellence (NoE) is helping in the development of common standards, open-source software, freely accessible data and model repositories, and various training and dissemination activities for the project. It is also helping to coordinate the many clinically targeted projects that have been funded under the FP7 calls. An initial vision for the VPH was defined by framework 6 strategy for a European physiome (STEP) project in 2006. It is now time to assess the accomplishments of the last 2 years and update the STEP vision for the VPH. We consider the biomedical science, healthcare and information and communications technology challenges facing the project and we propose the VPH Institute as a means of sustaining the vision of VPH beyond the time frame of the NoE.
A vision and strategy for the virtual physiological human in 2010 and beyond
Hunter, Peter; Coveney, Peter V.; de Bono, Bernard; Diaz, Vanessa; Fenner, John; Frangi, Alejandro F.; Harris, Peter; Hose, Rod; Kohl, Peter; Lawford, Pat; McCormack, Keith; Mendes, Miriam; Omholt, Stig; Quarteroni, Alfio; Skår, John; Tegner, Jesper; Randall Thomas, S.; Tollis, Ioannis; Tsamardinos, Ioannis; van Beek, Johannes H. G. M.; Viceconti, Marco
2010-01-01
European funding under framework 7 (FP7) for the virtual physiological human (VPH) project has been in place now for nearly 2 years. The VPH network of excellence (NoE) is helping in the development of common standards, open-source software, freely accessible data and model repositories, and various training and dissemination activities for the project. It is also helping to coordinate the many clinically targeted projects that have been funded under the FP7 calls. An initial vision for the VPH was defined by framework 6 strategy for a European physiome (STEP) project in 2006. It is now time to assess the accomplishments of the last 2 years and update the STEP vision for the VPH. We consider the biomedical science, healthcare and information and communications technology challenges facing the project and we propose the VPH Institute as a means of sustaining the vision of VPH beyond the time frame of the NoE. PMID:20439264
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
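As a rough illustration of the texture measurement named above, Haralick feature 4 (sum of squares: variance) can be computed from a normalised grey-level co-occurrence matrix (GLCM). This is a generic textbook formulation under an assumed single pixel offset, not the study's implementation:

```python
import numpy as np

def glcm(img, levels, dr=0, dc=1):
    """Grey-level co-occurrence matrix for pixel offset (dr, dc), normalised
    so its entries sum to 1."""
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

def haralick_f4(P):
    """Haralick feature 4 (sum of squares: variance): sum over (i, j) of
    (i - mu)^2 * p(i, j), with mu the P-weighted mean row grey level."""
    i = np.arange(P.shape[0])
    mu = (i[:, None] * P).sum()
    return (((i[:, None] - mu) ** 2) * P).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img, levels=4)
print(round(haralick_f4(P), 4))
```

Definitions of the weighting mean vary slightly across texture references; a production system would also average over several offsets and directions.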
Robot and Human Surface Operations on Solar System Bodies
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Easter, R.; Rodriguez, G.
2001-01-01
This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long-Range Vision of Surface Scenarios; 2) Humans and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need for More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.
NASA Astrophysics Data System (ADS)
Kim, J.
2016-12-01
Considering high levels of uncertainty, epistemological conflicts over facts and values, and a sense of urgency, normal paradigm-driven science will be insufficient to mobilize people and nations toward sustainability. The conceptual framework to bridge societal system dynamics with that of the natural ecosystems in which humanity operates remains deficient. The key to understanding their coevolution is to understand 'self-organization.' An information-theoretic approach may shed light on a potential framework that enables us not only to bridge humanity and nature but also to generate useful knowledge for understanding and sustaining the integrity of ecological-societal systems. How can information theory help us understand the interface between ecological systems and social systems? How can we delineate self-organizing processes and ensure that they fulfil sustainability? How should we evaluate the flow of information from data through models to decision-makers? These are the core questions posed by sustainability science, in which visioneering (i.e., the engineering of vision) is an essential framework. Yet visioneering has neither a quantitative measure nor an information-theoretic framework to work with and teach. This presentation is an attempt to accommodate the framework of self-organizing hierarchical open systems with visioneering into a common information-theoretic framework. A case study is presented with the UN/FAO's communal vision of climate-smart agriculture (CSA), which pursues a trilemma of efficiency, mitigation, and resilience. Challenges of delineating and facilitating self-organizing systems are discussed using transdisciplinary tools such as complex systems thinking, dynamic process network analysis, and multi-agent systems modeling. Acknowledgments: This study was supported by the Korea Meteorological Administration Research and Development Program under Grant KMA-2012-0001-A (WISE project).
Comparative Geometrical Accuracy Investigations of Hand-Held 3d Scanning Systems - AN Update
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Lindstaedt, M.; Starosta, D.
2018-05-01
Hand-held 3D scanning systems are increasingly available on the market from several system manufacturers. These systems are deployed for 3D recording of objects of different sizes in diverse applications, such as industrial reverse engineering and the documentation of museum exhibits. Typical measurement distances range from 0.5 m to 4.5 m. Although they are often easy to use, the geometric performance of these systems, especially their precision and accuracy, is not well known to many users. First geometrical investigations of a variety of diverse hand-held 3D scanning systems were already carried out by the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg (HCU Hamburg) in cooperation with two other universities in 2016. To obtain more information about the accuracy behaviour of the latest generation of hand-held 3D scanning systems, HCU Hamburg conducted further comparative geometrical investigations using structured light systems with speckle pattern (Artec Spider, Mantis Vision PocketScan 3D, Mantis Vision F5-SR, Mantis Vision F5-B, and Mantis Vision F6) and photogrammetric systems (Creaform HandySCAN 700 and Shining FreeScan X7). In the framework of these comparative investigations, geometrically stable reference bodies were used. The appropriate reference data were acquired by measurements with two structured light projection systems (AICON smartSCAN and GOM ATOS I 2M). The comprehensive test results of the different test scenarios are presented and critically discussed in this contribution.
Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.
2014-01-01
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they could see it OTW with natural vision.
Machine vision methods for use in grain variety discrimination and quality analysis
NASA Astrophysics Data System (ADS)
Winter, Philip W.; Sokhansanj, Shahab; Wood, Hugh C.
1996-12-01
The decreasing cost of computer technology has made it feasible to incorporate machine vision technology into the agriculture industry. The biggest attraction of using a machine vision system is the computer's ability to be completely consistent and objective. One use is in the variety discrimination and quality inspection of grains. Algorithms have been developed using Fourier descriptors and neural networks for variety discrimination of barley seeds. RGB and morphology features have been used in the quality analysis of lentils, and probability distribution functions and L,a,b color values for borage dockage testing. These methods have been shown to be very accurate and to have high potential for agriculture. This paper presents the techniques used and results obtained from projects including: a lentil quality discriminator, a barley variety classifier, a borage dockage tester, a popcorn quality analyzer, and a pistachio nut grading system.
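A common formulation of the Fourier descriptors mentioned above represents the closed seed boundary as complex points x + iy, takes the FFT, and normalises the magnitudes for scale and rotation invariance. This sketch uses a synthetic circle and is not necessarily the authors' exact normalisation:

```python
import numpy as np

def fourier_descriptors(boundary, n_desc=8):
    """Scale- and rotation-invariant Fourier descriptors of a closed
    boundary sampled as complex points x + iy."""
    F = np.fft.fft(boundary)
    mags = np.abs(F[1:n_desc + 1])     # drop F[0] (position); magnitudes kill rotation
    return mags / mags[0]              # divide by |F[1]| for scale invariance

# A sampled circle: its energy concentrates in the first harmonic
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.cos(t) + 1j * np.sin(t)
d = fourier_descriptors(circle)
print(np.round(d, 3))
```

The resulting descriptor vector could then feed a neural network classifier, as the paper does for barley seeds; the descriptor is unchanged if the boundary is uniformly scaled or rotated.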
LexTran support project : strategic planning support for LexTran visioning.
DOT National Transportation Integrated Search
2005-09-01
In October 2003, LexTran, the City of Lexington's public transportation provider, was undergoing a management transition. It sought the assistance of the Kentucky Transportation Center for strategic planning and visioning. This project produced fou...
PRoViScout: a planetary scouting rover demonstrator
NASA Astrophysics Data System (ADS)
Paar, Gerhard; Woods, Mark; Gimkiewicz, Christiane; Labrosse, Frédéric; Medina, Alberto; Tyler, Laurence; Barnes, David P.; Fritz, Gerald; Kapellos, Konstantinos
2012-01-01
Mobile systems exploring planetary surfaces in the future will require more autonomy than today. The EU FP7-SPACE project PRoViScout (2010-2012) establishes the building blocks of such autonomous exploration systems in terms of robotic vision through a decision-based combination of navigation and scientific target selection, and integrates them into a framework ready for, and exposed to, field demonstration. The PRoViScout on-board system consists of mission management components such as an Executive, a Mars Mission On-Board Planner and Scheduler, a Science Assessment Module, and Navigation & Vision Processing modules. The platform hardware consists of the rover with its sensors and pointing devices. We report on the major building blocks and their functions and interfaces, with emphasis on the computer vision parts such as image acquisition (using a novel zoomed 3D time-of-flight & RGB camera), mapping from 3D-TOF data, panoramic image and stereo reconstruction, hazard and slope maps, visual odometry, and the recognition of potentially scientifically interesting targets.
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
MOBLAB: a mobile laboratory for testing real-time vision-based systems in path monitoring
NASA Astrophysics Data System (ADS)
Cumani, Aldo; Denasi, Sandra; Grattoni, Paolo; Guiducci, Antonio; Pettiti, Giuseppe; Quaglia, Giorgio
1995-01-01
In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing and testing real-time algorithms which monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suitable to map the position of the vehicle within the environment where it moves, to detect obstacles, to estimate motion, to plan the path and to warn the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory outline and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. `Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Some preliminary results are presented showing the performances of the system.
The Light Plane Calibration Method of the Laser Welding Vision Monitoring System
NASA Astrophysics Data System (ADS)
Wang, B. G.; Wu, M. H.; Jia, W. P.
2018-03-01
In the aerospace and automobile industries, sheet steel components are very important parts. In recent years, the laser welding technique has been used to weld sheet steel parts. The seam width between the two parts is usually less than 0.1 mm. Because fixture positioning errors cannot be eliminated, weld quality can be greatly affected. In order to improve the welding quality, line structured light is employed in the vision monitoring system to plan the welding path before welding. To improve the weld precision, the vision system is mounted on the Z axis of a computer numerical control (CNC) machine tool. A planar pattern is placed on the X-Y plane of the CNC tool, and the structured light is projected onto the planar pattern. The vision system stops at three different positions along the Z axis of the CNC tool, and the camera captures an image of the planar pattern at each position. Using the extracted sub-pixel center line of the structured light, the world coordinates of the center line can be calculated. The structured light plane is then obtained by fitting the center lines. Experimental results show the effectiveness of the proposed method.
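The final plane-fitting step described above can be sketched as an ordinary least-squares fit to the reconstructed center-line points. This is a minimal illustration, not the paper's implementation; the function name and the numbers are hypothetical.

```python
import numpy as np

def fit_light_plane(points):
    """Fit a plane z = a*x + b*y + c to 3-D center-line points
    by linear least squares; returns (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

# Synthetic center-line points gathered at three camera heights, all
# lying on the (hypothetical) plane z = 0.5*x - 0.2*y + 10.
xs = np.linspace(0.0, 50.0, 30)
points = [(x, 3.0 * i, 0.5 * x - 0.2 * (3.0 * i) + 10.0)
          for i in range(3) for x in xs]
a, b, c = fit_light_plane(points)
```

With noise-free synthetic data the fit recovers the plane coefficients exactly; with real center-line extractions the least-squares residual gives a quick check on calibration quality.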
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented, in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel sensor calibration approach is proposed to improve the calibration accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured in a calibration target; the concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is carried out with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals remaining after the application of the RAC method; the hybrid of the pinhole model and the MLPNN therefore represents the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
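The residual-compensation idea (a learned corrector applied on top of an analytic camera model) can be sketched as follows, with a quadratic polynomial standing in for the paper's MLPNN. All names and data here are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def fit_residual_model(uv, residuals):
    """Fit a quadratic polynomial corrector (a simplified stand-in for
    the paper's MLPNN) mapping image coordinates to model residuals."""
    u, v = uv[:, 0], uv[:, 1]
    A = np.column_stack([u**2, v**2, u * v, u, v, np.ones_like(u)])
    coeffs, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return coeffs

def apply_correction(uv, model_pred, coeffs):
    """Hybrid model: analytic prediction plus learned residual."""
    u, v = uv[:, 0], uv[:, 1]
    A = np.column_stack([u**2, v**2, u * v, u, v, np.ones_like(u)])
    return model_pred + A @ coeffs

# Hypothetical calibration data: the analytic model leaves a smooth
# systematic residual, which the corrector learns from samples.
rng = np.random.default_rng(0)
uv = rng.uniform(0.0, 100.0, size=(200, 2))
true_residual = 0.01 * uv[:, 0]**2 - 0.3 * uv[:, 1] + 2.0
coeffs = fit_residual_model(uv, true_residual)
pred = apply_correction(uv, np.zeros(200), coeffs)
```

A real MLPNN would replace the fixed polynomial basis with learned nonlinear features, but the hybrid structure (analytic model plus residual corrector) is the same.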
NASA Astrophysics Data System (ADS)
Helbing, D.; Balietti, S.; Bishop, S.; Lukowicz, P.
2011-05-01
This contribution reflects on the comments of Peter Allen [1], Bikas K. Chakrabarti [2], Péter Érdi [3], Juval Portugali [4], Sorin Solomon [5], and Stefan Thurner [6] on three White Papers (WP) of the EU Support Action Visioneer (www.visioneer.ethz.ch). These White Papers are entitled "From Social Data Mining to Forecasting Socio-Economic Crises" (WP 1) [7], "From Social Simulation to Integrative System Design" (WP 2) [8], and "How to Create an Innovation Accelerator" (WP 3) [9]. In our reflections, the need and feasibility of a "Knowledge Accelerator" is further substantiated by fundamental considerations and recent events around the globe.

The Visioneer White Papers propose research to be carried out that will improve our understanding of complex techno-socio-economic systems and their interaction with the environment. Thereby, they aim to stimulate multi-disciplinary collaborations between ICT, the social sciences, and complexity science. Moreover, they suggest combining the potential of massive real-time data, theoretical models, large-scale computer simulations and participatory online platforms. By doing so, it would become possible to explore various futures and to expand the limits of human imagination when it comes to the assessment of the often counter-intuitive behavior of these complex techno-socio-economic-environmental systems. In this contribution, we also highlight the importance of a pluralistic modeling approach and, in particular, the need for a fruitful interaction between quantitative and qualitative research approaches.

In an appendix we briefly summarize the concept of the FuturICT flagship project, which will build on and go beyond the proposals made by the Visioneer White Papers. EU flagships are ambitious multi-disciplinary high-risk projects with a duration of at least 10 years amounting to an envisaged overall budget of 1 billion EUR [10]. 
The goal of the FuturICT flagship initiative is to understand and manage complex, global, socially interactive systems, with a focus on sustainability and resilience.
Human Detection from a Mobile Robot Using Fusion of Laser and Vision Information
Fotiadis, Efstathios P.; Garzón, Mario; Barrientos, Antonio
2013-01-01
This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method. PMID:24008280
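The final fusion step mentioned above, combining calibrated probabilities from the laser and vision modules, can be sketched under a naive independence assumption. This is a hypothetical illustration; the paper's exact fusion rule is not spelled out in the abstract.

```python
def fuse_probabilities(p_laser, p_vision):
    """Fuse two calibrated detection probabilities assuming the two
    modules err independently (naive-Bayes combination with a
    uniform prior)."""
    num = p_laser * p_vision
    den = num + (1.0 - p_laser) * (1.0 - p_vision)
    return num / den

# Two modules that individually lean "human" reinforce each other:
fused = fuse_probabilities(0.8, 0.7)   # ≈ 0.903, higher than either alone
```

The calibration step matters because raw classifier scores (e.g. SVM margins) are not probabilities; only after mapping them to calibrated probabilities does a combination rule like this behave sensibly.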
Human detection from a mobile robot using fusion of laser and vision information.
Fotiadis, Efstathios P; Garzón, Mario; Barrientos, Antonio
2013-09-04
This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method.
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.
Machine vision system for measuring conifer seedling morphology
NASA Astrophysics Data System (ADS)
Rigney, Michael P.; Kranzler, Glenn A.
1995-01-01
A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line- scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
76 FR 42684 - Mid-Atlantic Fishery Management Council (MAFMC); Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... Fishery Management Council (MAFMC); Public Meeting AGENCY: National Marine Fisheries Service (NMFS.... SUMMARY: The Mid-Atlantic Fishery Management Council Staff will hold a meeting of the Visioning Project Advisory Panel to discuss communications strategies and data gathering tools for the Visioning Project...
2008-09-01
teleophthalmology system as used by three federal healthcare agencies for detecting proliferative diabetic retinopathy. Telemedicine and e-Health. 2005;11:641-651...a telemedicine system for comprehensive diabetes management and assessment of diabetic retinopathy that provides increased access for diabetic ...CDMP developed under this collaborative effort. 15. SUBJECT TERMS: Joslin Vision Network, telemedicine, diabetes mellitus, diabetic retinopathy
Automated measurement of human body shape and curvature using computer vision
NASA Astrophysics Data System (ADS)
Pearson, Jeremy D.; Hobson, Clifford A.; Dangerfield, Peter H.
1993-06-01
A system to measure the surface shape of the human body has been constructed. The system uses a fringe pattern generated by projection of multi-stripe structured light. The optical methodology used is fully described and the algorithms used to process acquired digital images are outlined. The system has been applied to the measurement of the shape of the human back in scoliosis.
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
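The scale-recovery idea above, using an absolute laser distance to fix the scale of an up-to-scale monocular trajectory, can be sketched as follows. This is a minimal single-measurement illustration; the paper fuses many distance measurements along the sequence to correct scale drift.

```python
import numpy as np

def recover_scale(traj, cam_depth_est, laser_depth):
    """Rescale an up-to-scale monocular trajectory using one absolute
    laser distance to a point whose monocular depth estimate is known.
    (Sketch only; all values below are hypothetical.)"""
    s = laser_depth / cam_depth_est
    return np.asarray(traj, dtype=float) * s

# Monocular VO places a feature 2.0 (arbitrary) units away; the laser
# measures 6.0 m to the same point, so the trajectory scales by 3.
traj = recover_scale([[0.0, 0.0, 0.0], [0.1, 0.0, 0.2]],
                     cam_depth_est=2.0, laser_depth=6.0)
```

In practice each new laser measurement yields a fresh scale estimate, and blending these estimates over time is what keeps the global trajectory metrically consistent.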
Flight Simulator Evaluation of Display Media Devices for Synthetic Vision Concepts
NASA Technical Reports Server (NTRS)
Arthur, J. J., III; Williams, Steven P.; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2004-01-01
The Synthetic Vision Systems (SVS) Project of the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSP) is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft. To accomplish these safety and capacity improvements, the SVS concept is designed to provide a clear view of the world around the aircraft through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. Display media devices with which to implement SVS technology that have been evaluated so far within the Project include fixed field of view head up displays and head down Primary Flight Displays with pilot-selectable field of view. A simulation experiment was conducted comparing these display devices to a fixed field of view, unlimited field of regard, full color Helmet-Mounted Display system. Subject pilots flew a visual circling maneuver in IMC at a terrain-challenged airport. The data collected in this experiment are compared with those of past SVS research studies.
Machine vision 1992-1996: technology program to promote research and its utilization in industry
NASA Astrophysics Data System (ADS)
Soini, Antti J.
1994-10-01
Machine vision technology has attracted strong interest in Finnish research organizations, resulting in many innovative products for industry. Despite this, end users were very skeptical toward machine vision and its robustness in harsh industrial environments. Therefore the Technology Development Centre, TEKES, which funds technology-related research and development projects in universities and individual companies, decided to start a national technology program, Machine Vision 1992-1996. Led by industry, the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is on nationally important, demanding applications. The program will create new industry and business for machine vision producers and encourage the process and manufacturing industries to take advantage of this new technology. So far 60 companies and all major universities and research centers are working on our forty different projects. The key themes are process control, robot vision and quality control.
A Clear Vision for Equity and Opportunity.
ERIC Educational Resources Information Center
Gould, Marge Christensen; Gould, Herman
2003-01-01
Describes undetected and uncorrected vision problems for children in poverty associated with juvenile delinquency and poor academic performance. Discusses success of a project offering vision screening and free glasses for at-risk students in Tucson, Arizona. (PKP)
Latency Requirements for Head-Worn Display S/EVS Applications
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Trey Arthur, J. J., III; Williams, Steven P.
2004-01-01
NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.
Latency requirements for head-worn display S/EVS applications
NASA Astrophysics Data System (ADS)
Bailey, Randall E.; Arthur, Jarvis J., III; Williams, Steven P.
2004-08-01
NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.
Overview of the Small Aircraft Transportation System Project Four Enabling Operating Capabilities
NASA Technical Reports Server (NTRS)
Viken, Sally A.; Brooks, Frederick M.; Johnson, Sally C.
2005-01-01
It has become evident that our commercial air transportation system is reaching its peak in terms of capacity, with numerous delays in the system and the demand still steadily increasing. NASA, FAA, and the National Consortium for Aviation Mobility (NCAM) have partnered to help increase mobility throughout the United States through the Small Aircraft Transportation System (SATS) project. The SATS project has been a five-year effort to provide the technical and economic basis for further national investment and policy decisions to support a small aircraft transportation system. The SATS vision is to enable people and goods to have the convenience of on-demand point-to-point travel, anywhere, anytime for both personal and business travel. This vision can be obtained by expanding near all-weather access to more than 3,400 small community airports that are currently under-utilized throughout the United States. SATS has focused its efforts on four key operating capabilities that have addressed new emerging technologies, procedures, and concepts to pave the way for small aircraft to operate in nearly all weather conditions at virtually any runway in the United States. These four key operating capabilities are: Higher Volume Operations at Non-Towered/Non-Radar Airports, En Route Procedures and Systems for Integrated Fleet Operations, Lower Landing Minimums at Minimally Equipped Landing Facilities, and Increased Single Pilot Performance. The SATS project culminated in the 2005 SATS Public Demonstration in Danville, Virginia, on June 5-7, showcasing the accomplishments achieved throughout the project and demonstrating that a small aircraft transportation system could be viable. The technologies, procedures, and concepts were successfully demonstrated to show that they were safe, effective, and affordable for small aircraft in near all weather conditions. 
The focus of this paper is to provide an overview of the technical and operational feasibility of the four operating capabilities, and explain how they can enable a small aircraft transportation system.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...
Alternatives to Pyrotechnic Distress Signals; Laboratory and Field Studies
2015-03-01
using night vision imaging systems (NVIS) with "minus-blue" filtering, the project recommends additional research and testing leading to the inclusion... [Remaining table-of-contents/figure-list fragments: "Background Images"; "Example of image capture from radiant imaging colorimeter"; "Figure 10. Laboratory setup".]
Pavement Distress Evaluation Using 3D Depth Information from Stereo Vision
DOT National Transportation Integrated Search
2012-07-01
The focus of the current project funded by MIOH-UTC for the period 9/1/2010-8/31/2011 is to : enhance our earlier effort in providing a more robust image processing based pavement distress : detection and classification system. During the last few de...
A Look at the Condition of Education in Massachusetts
ERIC Educational Resources Information Center
d'Entremont, Chad
2014-01-01
Leaders engaged in Massachusetts' public higher education system--including at community colleges, state universities, and UMass--have demonstrated their strong commitment to improvement in recent years. The state Department of Higher Education's Vision Project is focused on reforms necessary to "produce the best educated citizenry and…
NASA Technical Reports Server (NTRS)
Downward, James G.
1992-01-01
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
Low-Latency Embedded Vision Processor (LLEVS)
2016-03-01
[Table-of-contents fragments: "Task 3 Projected Performance Analysis of FPGA-based Vision Processor"; "Algorithms Latency Analysis"; "...Programmable Gate Array Custom Hardware for Real-Time Multiresolution Analysis".] ...conduct data analysis for performance projections. The data acquired through measurements, simulation and estimation provide the requisite platform for
76 FR 12943 - Mid-Atlantic Fishery Management Council; Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-09
... Strategic Planning Project. The roadmap will detail how the Council solicits stakeholder input and then incorporates that input into a vision and strategic plan that will guide Council Actions in the future. Any briefing materials will be posted to the Council's Visioning and Strategic Planning Project Web site: http...
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain is found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on these principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.
NASA Technical Reports Server (NTRS)
Studor, George
2007-01-01
A viewgraph presentation on lessons learned from NASA Johnson Space Center's micro-wireless instrumentation is shown. The topics include: 1) Background, Rationale and Vision; 2) NASA JSC/Structural Engineering Approach & History; 3) Orbiter Wing Leading Edge Impact Detection System; 4) WLEIDS Confidence and Micro-WIS Lessons Learned; and 5) Current Projects and Recommendations.
Skills for a Changing World: National Perspectives and the Global Movement
ERIC Educational Resources Information Center
Care, Esther; Kim, Helyn; Anderson, Kate; Gustafsson-Wright, Emily
2017-01-01
The Skills for a Changing World project presents evidence of a movement of education systems globally toward a more explicit focus on a broad range of skills that our 21st century society needs and demands. This movement can be seen in the vision and mission statements of education systems as well as through their curricula. Although clearly…
Development Of Autonomous Systems
NASA Astrophysics Data System (ADS)
Kanade, Takeo
1989-03-01
In the last several years at the Robotics Institute of Carnegie Mellon University, we have been working on two projects for developing autonomous systems: Navlab for the Autonomous Land Vehicle and Ambler for the Mars Rover. These two systems serve different purposes: the Navlab is a four-wheeled vehicle (van) for road and open-terrain navigation, and the Ambler is a six-legged locomotor for Mars exploration. The two projects, however, share many common aspects. Both are large-scale integrated systems for navigation. In addition to the development of individual components (e.g., construction and control of the vehicle, vision and perception, and planning), integration of those component technologies into a system by means of an appropriate architecture is a major issue.
Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces
NASA Technical Reports Server (NTRS)
Altschuler, M. D.; Altschuler, B. R.; Taboada, J.
1981-01-01
It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
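One common recovery step in structured-light systems like this, intersecting a camera viewing ray with a calibrated laser plane to obtain a 3-D point, can be sketched as follows. This is a simplified stand-in: the paper's space-coding method handles a full beam array observed from multiple perspectives, and all numbers here are hypothetical.

```python
import numpy as np

def triangulate_dot(ray_dir, plane_n, plane_d, cam_origin=np.zeros(3)):
    """Intersect a camera viewing ray (origin + t*dir) with a calibrated
    laser plane n·x = d to recover the 3-D position of a projected dot."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    t = (plane_d - plane_n @ cam_origin) / (plane_n @ ray_dir)
    return cam_origin + t * ray_dir

# Hypothetical setup: laser plane z = 5 (n = [0,0,1], d = 5); the dot is
# seen along the ray direction [0.2, -0.1, 1] from the camera centre.
p = triangulate_dot([0.2, -0.1, 1.0], np.array([0.0, 0.0, 1.0]), 5.0)
```

The space-coding scheme described in the paper solves the harder part of the problem, namely deciding which observed dot corresponds to which projected beam, after which each dot reduces to a ray intersection like the one above.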
Synthetic Vision Displays for Planetary and Lunar Lander Vehicles
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Shelton, Kevin J.; Kramer, Lynda J.; Bailey, Randall E.; Norman, Robert M.
2008-01-01
Aviation research has demonstrated that Synthetic Vision (SV) technology can substantially enhance situation awareness, reduce pilot workload, improve aviation safety, and promote flight-path control precision. SV and related flight-deck technologies are currently being extended for application in planetary exploration vehicles. SV in particular holds significant potential for many planetary missions, since the SV presentation provides the flight crew with a computer-generated view of the terrain and other significant environmental characteristics, independent of outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles, which may provide invaluable visual cues to pilots/astronauts not available from other vision technologies. In addition, important vehicle state information, such as forward and down velocities, altitude, and fuel remaining, may be conformally displayed on the view to enhance trajectory control and awareness of vehicle system status. The paper accompanies a conference demonstration that introduced a prototype NASA Synthetic Vision system for lunar lander spacecraft. The paper describes technical challenges and potential solutions for SV applications in the lunar landing mission, including the requirements for high-resolution lunar terrain maps, accurate positioning and orientation, and lunar cockpit display concepts to support projected mission challenges.
Suzuki, Daichi G; Murakami, Yasunori; Yamazaki, Yuji; Wada, Hiroshi
2015-01-01
Image-forming vision is crucial to animals for recognizing objects in their environment. In vertebrates, this type of vision is achieved with paired camera eyes and topographic projection of the optic nerve. Topographic projection is established by an orthogonal gradient of axon guidance molecules, such as Ephs. To explore the evolution of image-forming vision in vertebrates, lampreys, which belong to the basal lineage of vertebrates, are key animals because they show unique "dual visual development." In the embryonic and pre-ammocoete larval stage (the "primary" phase), photoreceptive "ocellus-like" eyes develop, but there is no retinotectal optic nerve projection. In the late ammocoete larval stage (the "secondary" phase), the eyes grow and form into camera eyes, and retinotectal projection is newly formed. After metamorphosis, this retinotectal projection in adult lampreys is topographic, similar to that of gnathostomes. In this study, we explored the involvement of Ephs in lamprey "dual visual development" and the establishment of image-forming vision. We found that gnathostome-like orthogonal gradient expression was present in the retina during the "secondary" phase; i.e., EphB showed a gradient of expression along the dorsoventral axis, while EphC was expressed along the anteroposterior axis. However, no orthogonal gradient expression was observed during the "primary" phase. These observations suggest that Ephs were likely recruited de novo for the guidance of the topographic "second" optic nerve projection. Transformations during lamprey "dual visual development" may represent "recapitulation" from a protochordate-like ancestor to a gnathostome-like vertebrate ancestor. © 2015 Wiley Periodicals, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
Notice of meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice of a meeting of Special Committee 213. DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
Notice of meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice of a meeting of Special Committee 213. DATES: The meeting will be held October...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
Notice of meeting: RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice of a meeting of Special Committee 213: EUROCAE WG-79. DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
Notice of meeting: RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice of a meeting of Special Committee 213: EUROCAE WG-79. DATES: The...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
Notice of meeting: RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice of a meeting of Special Committee 213: EUROCAE WG-79. DATES: The meeting will...
DOT National Transportation Integrated Search
2005-12-01
This volume provides an overview of the six studies that compose Phase II of the Enhanced Night Visibility project and the experimental plan for its third and final portion, Phase III. The Phase II studies evaluated up to 12 vision enhancement system...
2020 Vision Project Summary: FY99
DOE Office of Scientific and Technical Information (OSTI.GOV)
K.W. Gordon; K.P. Scott
2000-01-01
During the 1998-99 school year, students from participating schools completed and submitted a variety of scenarios describing potential world and regional conditions in the year 2020 and their possible effects on U.S. national security. This report summarizes the students' views and describes trends observed over the course of the 2020 Vision project's four years.
VISIONS for Greater Employment Opportunities. Final Report.
ERIC Educational Resources Information Center
Orangeburg-Calhoun Technical Coll., Orangeburg, SC.
The VISIONS project, a workplace literacy program held in two manufacturing plants and a regional medical center, was conducted during an 18-month period from July 1, 1993 to December 31, 1994. During the project, staff were hired and trained, task analyses and orientation sessions were held, and tests and curricula were developed. Employees were…
Vision for perception and vision for action in the primate brain.
Goodale, M A
1998-01-01
Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer to the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on perceptual representations of the world. The two streams of visual processing that have been identified in the primate cerebral cortex are a reflection of these two functions of vision. The dorsal 'action' stream, projecting from primary visual cortex to the posterior parietal cortex, provides flexible control of more ancient subcortical visuomotor modules for the production of motor acts. The ventral 'perceptual' stream, projecting from the primary visual cortex to the temporal lobe, provides the rich and detailed representation of the world required for cognitive operations. Both streams process information about the structure of objects and about their spatial locations--and both are subject to the modulatory influences of attention. Each stream, however, uses visual information in different ways. Transformations carried out in the ventral stream permit the formation of perceptual representations that embody the enduring characteristics of objects and their relations; those carried out in the dorsal stream, which utilize moment-to-moment information about objects within egocentric frames of reference, mediate the control of skilled actions. Both streams work together in the production of goal-directed behaviour.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
Notice of meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice of a meeting of Joint Special Committee 213: EUROCAE WG-79 (EFVS/SVS...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
Notice of meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). AGENCY: Federal Aviation Administration. SUMMARY: The FAA is issuing this notice of a meeting of Joint Special Committee 213: EUROCAE WG-79 (EFVS/SVS...
Potato Operation: automatic detection of potato diseases
NASA Astrophysics Data System (ADS)
Lefebvre, Marc; Zimmerman, Thierry; Baur, Charles; Guegerli, Paul; Pun, Thierry
1995-01-01
The Potato Operation is a collaborative, multidisciplinary project in the domain of destructive testing of agricultural products. It aims at automating pulp sampling of potatoes in order to detect possible viral diseases; such viruses can decrease field productivity by a factor of up to ten. A machine composed of three conveyor belts, a vision system, and a robotic arm, all controlled by a PC, has been built. Potatoes are brought one by one from bulk to the vision system, where they are seized by a rotating holding device. The sprouts, where viral activity is highest, are then detected by an active vision process operating on multiple views. The 3D coordinates of the sampling point are communicated to the robot arm, which holds a drill. Some flesh is sampled by the drill and deposited into an ELISA plate. After sampling, the robot arm washes the drill in order to prevent any contamination. The PC simultaneously controls the conveying of the potatoes, the vision algorithms, and the sampling procedure. The master process, the vision procedure, uses three methods to detect the sprouts: a profile analysis first locates the sprouts as protuberances, and two frontal analyses, based respectively on fluorescence and on local variance, confirm the previous detection and provide the 3D coordinates of the sampling zone. The other two processes work by interruption of the master process.
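A local-variance map of the kind used in the second frontal analysis can be sketched generically (the window size, threshold, and toy image below are illustrative choices, not the project's actual parameters):

```python
# Local-variance map: a rough, textured region (e.g. a sprout against
# smooth potato skin) yields high variance inside a small window.

def local_variance(img, r=1):
    """Variance of intensities in a (2r+1)x(2r+1) window, computed at
    every interior pixel of a 2D intensity list; borders are left 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(r, h - r):
        for j in range(r, w - r):
            vals = [img[i + di][j + dj]
                    for di in range(-r, r + 1)
                    for dj in range(-r, r + 1)]
            m = sum(vals) / len(vals)
            out[i][j] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out

# A single bright pixel against uniform background: high variance there.
img = [[1, 1, 1],
       [1, 9, 1],
       [1, 1, 1]]
v = local_variance(img)[1][1]
```

Thresholding such a map gives candidate sprout regions, which the abstract says are then cross-checked against the fluorescence-based analysis and the profile (protuberance) detection.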
ERIC Educational Resources Information Center
Hinckley, June
2000-01-01
Discusses changes in technology, information, and people and the impact on music programs. The Vision 2020 project focuses on the future of music education. Addresses the events that created Vision 2020. Includes "The Housewright Declaration," a summarization of agreements from the Housewright Symposium on the Future of Music Education. (CMK)
Center for Neural Engineering: applications of pulse-coupled neural networks
NASA Astrophysics Data System (ADS)
Malkani, Mohan; Bodruzzaman, Mohammad; Johnson, John L.; Davis, Joel
1999-03-01
The Pulse-Coupled Neural Network (PCNN) is an oscillatory neural network model in which cells form groups, and groups form larger groupings, based on the synchronicity of their oscillations; the network's output is a time series (the number of cells that fire at each input presentation), also called an 'icon'. Recent work by Johnson and others demonstrated the functional capabilities of networks containing such elements for invariant feature extraction using intensity maps. The PCNN thus presents itself as a biologically plausible model with solid functional potential. This paper presents a summary of several projects, and their results, in which we successfully applied the PCNN. In the first project, the PCNN was applied to object recognition and classification through a robotic vision system; the features (icons) generated by the PCNN were fed into a feedforward neural network for classification. In the second project, we developed techniques for sensory data fusion. The PCNN algorithm was implemented and tested on a B14 mobile robot: PCNN-based features were extracted from images taken by the robot vision system and used, in conjunction with a map generated by fusing sonar and wheel-encoder data, for navigation of the mobile robot. In the third project, we applied the PCNN to speaker recognition: spectrogram images of speech signals are fed into the PCNN to produce invariant feature icons, which are then fed into a feedforward neural network for speaker identification.
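The icon-generation mechanism can be sketched with a minimal PCNN in the spirit of Johnson's model; the grid, parameter values, and 4-neighbour linking below are illustrative simplifications, not the Center's actual implementation:

```python
# Minimal pulse-coupled neural network (PCNN) sketch.  Each neuron's
# internal activity is its stimulus modulated by pulses from linked
# neighbours; it fires when activity exceeds a decaying threshold,
# which then jumps after firing.  The per-step firing counts form
# the 'icon' time series.

def pcnn_icon(image, steps=10, beta=0.2, v_theta=20.0, d_theta=0.8):
    """Run a simplified PCNN over a 2D intensity map (lists of floats
    in [0, 1]) and return the icon: neurons fired per iteration."""
    h, w = len(image), len(image[0])
    theta = [[1.0] * w for _ in range(h)]   # dynamic firing thresholds
    fired = [[0] * w for _ in range(h)]     # pulses from previous step
    icon = []
    for _ in range(steps):
        new_fired = [[0] * w for _ in range(h)]
        count = 0
        for i in range(h):
            for j in range(w):
                # Linking input: pulses from the 4-neighbourhood.
                link = sum(fired[i + di][j + dj]
                           for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                           if 0 <= i + di < h and 0 <= j + dj < w)
                # Internal activity: stimulus modulated by linking.
                u = image[i][j] * (1.0 + beta * link)
                if u > theta[i][j]:
                    new_fired[i][j] = 1
                    count += 1
        # Thresholds decay, then jump where a neuron just fired.
        for i in range(h):
            for j in range(w):
                theta[i][j] = d_theta * theta[i][j] + v_theta * new_fired[i][j]
        fired = new_fired
        icon.append(count)
    return icon

# A bright 2x2 patch fires together once its threshold has decayed,
# producing a synchronous burst in the icon.
img = [[0.9, 0.9, 0.1],
       [0.9, 0.9, 0.1],
       [0.1, 0.1, 0.1]]
icon = pcnn_icon(img)
```

Because similarly stimulated, linked neurons burst in synchrony, the resulting icon is largely invariant to where in the image the bright patch sits, which is the property the projects above exploit for classification.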
2009-04-09
... detecting proliferative diabetic retinopathy. Telemedicine and e-Health. 2005;11:641-651. MILESTONES AND DELIVERABLES: Completion of data... telemedicine system for comprehensive diabetes management and assessment of diabetic retinopathy that provides increased access for diabetic patients to... CDMP developed under this collaborative effort. SUBJECT TERMS: Joslin Vision Network, telemedicine, diabetes mellitus, diabetic retinopathy
NASA Astrophysics Data System (ADS)
Paar, G.
2009-04-01
To date, mainly the US has realized planetary space missions with an essential robotics component. Joining institutions, companies, and universities from established groups in Europe with two relevant players from the US, the EC FP7 project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic-vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present, and future robotic space missions to the Moon and the planets. This will lead to a significant enhancement of the scientific, technological, and educational outcomes of such missions. We report on the main PRoVisG objectives and the development status: - Past, present, and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing. - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be completed by and combined with modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain.
European taxpayers will be able to monitor the imaging and vision processing in a Mars-like environment, gaining insight into the complexity and methods of the processing, the potential and decision making of scientific exploitation of such data, and not least the elegance and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to students who are the future providers of European science and technology, inside and outside the space domain.
Jordan Reforms Public Education to Compete in a Global Economy
ERIC Educational Resources Information Center
Erickson, Paul W.
2009-01-01
The King of Jordan's vision for education is resulting in innovative projects for the country. King Abdullah II wants Jordan to develop its human resources through public education to equip the workforce with skills for the future. From King Abdullah II's vision, the Education Reform for a Knowledge Economy (ERfKE) project implemented by the…
Teaching the Very Recent Past: "Miriam's Vision" and the London Bombings
ERIC Educational Resources Information Center
Kitson, Alison; Thompson, Sarah
2015-01-01
"Miriam's Vision" is an educational project developed by the Miriam Hyman Memorial Trust, an organisation set up in memory of Miriam Hyman, one of the 52 victims of the London bombings of 2005. The project has developed a number of subject-based modules, including history, which are provided free to schools through the website…
ERIC Educational Resources Information Center
Sailor, Wayne; And Others
Intended for teachers of deaf-blind and severely handicapped students as well as for resource or itinerant teachers in the area of vision who have recently begun to serve low functioning students, the manual provides information on vision and on vision assessment. The manual serves three functions. It: (1) prepares teachers for participation in…
NASA Technical Reports Server (NTRS)
Simon, Tom
2009-01-01
To effectively manage a project, the project manager must have a plan, understand the current conditions, and be able to take corrective action when challenges arise. Research and design projects face technical, schedule, and budget challenges that make it difficult to use project management tools developed for projects based on previously demonstrated technologies. Projects developing new technologies are by their very nature trying something new, and thus have little to no data to support estimates for schedule and cost, let alone the technical outcome. Projects with a vision for the outcome but little confidence in the exact tasks needed to achieve that vision incur cost and schedule penalties when conceptual solutions require unexpected iterations or even a reinvention of the plan. This presentation shares the project management methodology and tools developed through trial and error for a NASA research and design project combining industry, academia, and NASA in-house work, in which Earned Value Management principles were employed but adapted to the realities of the government financial system and of challenging technology development. The priorities of the presented methodology are flexibility, accountability, and simplicity, giving the manager tools that help deliver to the customer without using up valuable time and resources on extensive planning and analysis. The presentation shares the methodology and tools and works through failed and successful examples from the three years of process evolution.
Samosky, Joseph T; Baillargeon, Emma; Bregman, Russell; Brown, Andrew; Chaya, Amy; Enders, Leah; Nelson, Douglas A; Robinson, Evan; Sukits, Alison L; Weaver, Robert A
2011-01-01
We have developed a prototype of a real-time, interactive projective overlay (IPO) system that creates an augmented-reality display of a medical procedure directly on the surface of a full-body mannequin human simulator. These images approximate the appearance of both the anatomic structures and the instrument activity occurring within the body. The key innovation of the current work is sensing the position and motion of an actual device (such as an endotracheal tube) inserted into the mannequin and using the sensed position to control projected video images portraying the internal appearance of the same devices and the relevant anatomic structures. The images are projected in correct registration onto the surface of the simulated body. As an initial practical prototype to test this technique, we have developed a system permitting real-time visualization of the intra-airway position of an endotracheal tube during simulated intubation training.
NASA Technical Reports Server (NTRS)
Hoberecht, Mark A.
2010-01-01
NASA's Energy Storage Project is one of many technology development efforts being implemented as part of the Exploration Technology Development Program (ETDP), under the auspices of the Exploration Systems Mission Directorate (ESMD). The Energy Storage Project is a focused technology development effort to advance lithium-ion battery and proton-exchange-membrane fuel cell (PEMFC) technologies to meet the specific power and energy storage needs of NASA Exploration missions. The fuel cell portion of the project has as its focus the development of both primary fuel cell power systems and regenerative fuel cell (RFC) energy storage systems, and is led by the NASA Glenn Research Center (GRC) in partnership with the Johnson Space Center (JSC), the Jet Propulsion Laboratory (JPL), the Kennedy Space Center (KSC), academia, and industrial partners. The development goals are to improve stack electrical performance, reduce system mass and parasitic power requirements, and increase system life and reliability.
Doing the Humanities: The Use of Undergraduate Classroom Humanities Research Projects.
ERIC Educational Resources Information Center
Geib, George W.
"American Visions" is a freshman-level survey course offered by the Department of History as part of Butler University's core curriculum. The course is built around three primary contextual considerations: high culture, popular culture, and community culture. The high culture approach is designed to introduce students to major systems of thought…
Education Sciences, Schooling, and Abjection: Recognizing Difference and the Making of Inequality?
ERIC Educational Resources Information Center
Popkewitz, Thomas
2008-01-01
Schooling in North America and northern Europe embodies salvation themes. The themes are (re)visions of Enlightenments' projects about the cosmopolitan citizen and scientific progress. The emancipatory principles, however, were never merely about freedom and inclusion. A comparative system of reason was inscribed as gestures of hope and fear. The…
Jóhannesson, Ómar I.; Balan, Oana; Unnthorsson, Runar; Moldoveanu, Alin; Kristjánsson, Árni
2016-01-01
The Sound of Vision project involves developing a sensory substitution device aimed at creating and conveying a rich auditory representation of the surrounding environment to the visually impaired. The feasibility of such an approach, however, is strongly constrained by neural flexibility, the possibilities of sensory substitution, and adaptation to changed sensory input. We review evidence for such flexibility from various perspectives. We discuss neuroplasticity of the adult brain, with an emphasis on functional changes in the visually impaired compared to sighted people. We discuss effects of adaptation on brain activity, in particular short-term and long-term effects of repeated exposure to particular stimuli. We then discuss evidence for sensory substitution of the kind Sound of Vision involves, and finally discuss evidence for adaptation to changes in the auditory environment. We conclude that sensory substitution enterprises such as Sound of Vision are quite feasible in light of the available evidence, which is encouraging for such projects. PMID:27355966
Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads
NASA Technical Reports Server (NTRS)
DiPaolo, Daniel
2003-01-01
The purpose of this project was to aid the EVA Robotic Assistant project by evaluating two stereo vision heads and designing the necessary interfaces for them: the TracLabs Biclops pan-tilt-verge head and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionality offered by each of the stereo vision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas of stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have several advantages over the Zebra, such as lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs, which are not capable of carrying large payloads. The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics, such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances.
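The "triangulation of back-projected rays" step can be illustrated with a generic midpoint method; the ray geometry below is a made-up example, and the paper's actual catadioptric projection model (which determines how pixels map to rays) is not reproduced:

```python
import math

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest-point (midpoint) triangulation of two 3D rays.

    o1, o2: ray origins; d1, d2: unit direction vectors.
    Returns the midpoint of the shortest segment joining the rays,
    a common estimate of the 3D point behind a stereo correspondence.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add_scaled(a, b, s): return [x + s * y for x, y in zip(a, b)]

    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom   # parameter along ray 1
    t2 = (a * e - b * d) / denom   # parameter along ray 2
    p1 = add_scaled(o1, d1, t1)
    p2 = add_scaled(o2, d2, t2)
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two viewpoints 0.2 m apart, both back-projecting toward (0, 0, 1).
n = math.sqrt(0.1 ** 2 + 1.0)
p = triangulate_midpoint([-0.1, 0, 0], [0.1 / n, 0, 1.0 / n],
                         [0.1, 0, 0], [-0.1 / n, 0, 1.0 / n])
```

In practice the two "viewpoints" of a folded omnistereo sensor are the effective viewpoints of the two mirrors, and the residual distance between `p1` and `p2` is one input to the kind of uncertainty model the abstract describes.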
Stereoscopic Machine-Vision System Using Projected Circles
NASA Technical Reports Server (NTRS)
Mackey, Jeffrey R.
2010-01-01
A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground.
The calibration-target image data are stored in computer memory for use as a template in processing terrain images. During operation on terrain, the images acquired by the left and right cameras are analyzed. The analysis includes (1) computation of the horizontal and vertical dimensions and the aspect ratios of rectangles that bound the circle images and (2) comparison of these aspect ratios with those of the template. Coordinates of distortions of the circles are used to identify and locate objects. If the analysis leads to identification of an object of significant size, then stereoscopic-vision algorithms are used to estimate the distance to the object. The time taken to perform this analysis on a single pair of images acquired by the left and right cameras is a fraction of the time taken to process the many pairs of images acquired in a sweep of the laser stripe across the field of view in the prior system. The results of the analysis include data on the sizes and shapes of, and the distances and directions to, objects. Coordinates of objects are updated as the vehicle moves so that intelligent decisions regarding speed and direction can be made. The results of the analysis are utilized in a computational decision-making process that generates obstacle-avoidance data and feeds those data to the control system of the robotic vehicle.
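The bounding-rectangle comparison at the heart of the analysis can be sketched as follows; the point sets, template ratio, and tolerance below are invented for illustration and are not the system's actual calibration data:

```python
# Sketch of the template-comparison step: a projected circle lands on
# flat ground as an ellipse with a known bounding-box aspect ratio;
# an obstacle distorts the circle and changes that ratio.

def bounding_aspect(points):
    """Aspect ratio (width/height) of the axis-aligned rectangle
    bounding a set of (x, y) image points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) / (max(ys) - min(ys))

def deviates_from_template(points, template_ratio, tol=0.15):
    """Flag a circle image whose bounding-box aspect ratio differs
    from the flat-ground template by more than `tol` (relative)."""
    ratio = bounding_aspect(points)
    return abs(ratio - template_ratio) / template_ratio > tol

# Extremal points of a circle image on flat ground vs. on a bump
# (hypothetical image coordinates).
flat = [(0, 0), (10, 4), (20, 0), (10, -4)]    # width 20, height 8
bump = [(0, 0), (10, 9), (20, 0), (10, -4)]    # stretched vertically
```

A circle flagged this way would then, per the description above, have the coordinates of its distortion passed to the stereoscopic-vision stage for distance estimation.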
Brannon, S Diane; Kemper, Peter; Barry, Theresa
2009-01-01
Better Jobs Better Care was a five-state direct care workforce demonstration designed to change policy and management practices that influence recruitment and retention of direct care workers, problems that continue to challenge providers. One of the projects, the North Carolina Partner Team, developed a unified approach in which skilled nursing, home care, and assisted living providers could be rewarded for meeting standards of workplace excellence. This case study documents the complex adaptive system agents and processes that coalesced to result in legislation recognizing the North Carolina New Organizational Vision Award. We used a holistic, single-case study design. Qualitative data from project work plans and progress reports, as well as notes from interviews with key stakeholders and observation of meetings, were coded into a simple rubric consisting of characteristics of complex adaptive systems. Key system agents in the state set the stage for the successful multistakeholder coalition. These included leadership by the North Carolina Department of Health and Human Services and a several-year effort to develop a unifying vision for workforce development. Grant resources were used to facilitate both content and process work. Structure was allowed to emerge as needed. The coalition's own development is shown to have changed the context from which it was derived. An inclusive and iterative process produced detailed standards and measures for the voluntary recognition process. With effective facilitation, the interests of the multiple stakeholders coalesced into a policy response that encourages practice changes. Implications for managing change-oriented coalitions are discussed.
1992-03-01
... construction were completed and data, from blue prints and physical measurements, was entered concurrently with the coding of routines for data retrieval. While... desirable for that view to accurately reflect what a person (or camera) would see if they were to stand at the same point in the physical world. To... physical dimensions. A parallel projection does not perform this scaling and is therefore not suitable to our application. B. GENERAL PERSPECTIVE
NASA Technical Reports Server (NTRS)
1991-01-01
The Center for Space Construction at the University of Colorado at Boulder was established in 1988 as a University Space Engineering Research Center. The mission of the Center is to conduct interdisciplinary engineering research which is critical to the construction of future space structures and systems and to educate students who will have the vision and technical skills to successfully lead future space construction activities. The research activities are currently organized around two central projects: Orbital Construction and Lunar Construction. Summaries of the research projects are included.
2015-08-21
using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the... depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3d module and allows the... [6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home
Glass Vision 3D: Digital Discovery for the Deaf
ERIC Educational Resources Information Center
Parton, Becky Sue
2017-01-01
Glass Vision 3D was a grant-funded project focused on developing and researching a Google Glass app that would allow young Deaf children to look at the QR code of an object in the classroom and see an augmented reality projection that displays an American Sign Language (ASL) related video. Twenty-five objects and videos were prepared and tested…
Gulf States Strategic Vision to Face Iranian Nuclear Project
2015-09-01
STRATEGIC VISION TO FACE IRANIAN NUCLEAR PROJECT by Fawzan A. Alfawzan, September 2015. Thesis Advisor: James Russell. Second Reader: Anne... nuclear weapons at a high degree. Nuclear capabilities provided Iran with uranium enrichment abilities and nuclear weapons to enable the country to... IN SECURITY STUDIES (STRATEGIC STUDIES) from the NAVAL POSTGRADUATE SCHOOL September 2015 Approved by: James Russell Thesis
Negotiating plausibility: intervening in the future of nanotechnology.
Selin, Cynthia
2011-12-01
The national-level scenarios project NanoFutures focuses on the social, political, economic, and ethical implications of nanotechnology, and was initiated by the Center for Nanotechnology in Society at Arizona State University (CNS-ASU). The project involves novel methods for the development of plausible visions of nanotechnology-enabled futures, elucidates public preferences for various alternatives, and, using such preferences, helps refine future visions for research and outreach. In doing so, the NanoFutures project aims to address a central question: how to deliberate on the social implications of an emergent technology whose outcomes are not known. The solution pursued by the NanoFutures project is twofold. First, NanoFutures limits speculation about the technology to plausible visions. This ambition introduces a host of concerns about the limits of prediction, the nature of plausibility, and how to establish plausibility. Second, it subjects these visions to democratic assessment by a range of stakeholders, thus raising methodological questions as to who the relevant stakeholders are and how to activate different communities so as to engage the far future. This article makes the dilemmas posed by decisions about such methodological issues transparent and thereby articulates the role of plausibility in anticipatory governance.
Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes.
Pinto-Lopera, Jesús Emilio; S T Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo
2016-09-15
Associated with weld quality, the weld bead geometry is one of the most important parameters in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not have occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement is compared using a statistical Welch's t-test of the null hypothesis, which in no case exceeds the threshold of significance level α = 0.01, validating the results and the performance of the proposed vision system.
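The calibration described in the abstract above rests on a projective transformation: a 3x3 homography maps pixel coordinates in the image to metric coordinates on the weld plane. The following is a minimal illustrative sketch of that mapping, not the authors' implementation; the matrix values and function name are hypothetical.

```python
def apply_homography(H, point):
    """Map a pixel coordinate through a 3x3 projective transformation.

    H is a 3x3 homography given as a list of rows; point is an (x, y)
    pixel. Returns the transformed (u, v) coordinate after the
    perspective divide, e.g. a metric position on the calibrated plane.
    """
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# Toy calibration (hypothetical): a pure scaling of 2 units per pixel.
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, (3, 4)))  # → (6.0, 8.0)
```

In a real passive-vision setup the homography would be estimated from known reference points rather than written by hand, and the measured bead width in pixels would be converted to millimetres by mapping both edge points through it.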
Flight instruments and helmet-mounted SWIR imaging systems
NASA Astrophysics Data System (ADS)
Robinson, Tim; Green, John; Jacobson, Mickey; Grabski, Greg
2011-06-01
Night vision technology has experienced significant advances in the last two decades. Night vision goggles (NVGs) based on gallium arsenide (GaAs) continue to raise the bar for alternative technologies. Resolution, gain, and sensitivity have all improved; the image quality through these devices is nothing less than incredible. Panoramic NVGs and enhanced NVGs are examples of recent advances that increase warfighter capabilities. Even with these advances, alternative night vision devices such as solid-state indium gallium arsenide (InGaAs) focal plane arrays are under development for helmet-mounted imaging systems. The InGaAs imaging system offers advantages over the existing NVGs. Two key advantages are: (1) the new system produces digital image data, and (2) the new system is sensitive to energy in the shortwave infrared (SWIR) spectrum. While it is tempting to contrast the performance of these digital systems with the existing NVGs, the advantage of different spectral detection bands leads to the conclusion that the technologies are less competitive and more synergistic. It is likely that, by the end of the decade, pilots within a cockpit will use multi-band devices. As such, flight decks will need to be compatible with both NVGs and SWIR imaging systems. Insertion of NVGs in aircraft during the late 1970s and early 1980s resulted in many "lessons learned" concerning instrument compatibility with NVGs. These "lessons learned" ultimately resulted in specifications such as MIL-L-85762A and MIL-STD-3009. These specifications are now used throughout industry to produce NVG-compatible illuminated instruments and displays for both military and civilian applications. Inserting a SWIR imaging device in a cockpit will require similar consideration. A project evaluating flight deck instrument compatibility with SWIR devices is currently ongoing; aspects of this evaluation are described in this paper. This project is sponsored by the Air Force Research Laboratory (AFRL).
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional, and autonomic aspects in social robot systems. This vision has evolved as a result of the efforts in consolidating the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The approach aims for generality, seeking universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.
Beyond the computer-based patient record: re-engineering with a vision.
Genn, B; Geukers, L
1995-01-01
In order to achieve real benefit from the potential offered by a Computer-Based Patient Record, the capabilities of the technology must be applied along with true re-engineering of healthcare delivery processes. University Hospital recognizes this and is using systems implementation projects as the catalyst for transforming the way we care for our patients. Integration is fundamental to the success of these initiatives, and this must be explicitly planned against an organized systems architecture whose standards are market-driven. University Hospital also recognizes that Community Health Information Networks will offer improved quality of patient care at a reduced overall cost to the system. All of these implementation factors are considered up front as the hospital makes its initial decisions on how to computerize its patient records. This improves our chances for success and will provide a consistent vision to guide the hospital's development of new and better patient care.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Ellis, Kyle E.; Arthur, Jarvis J.; Nicholas, Stephanie N.; Kiggins, Daniel
2017-01-01
A Commercial Aviation Safety Team (CAST) study of 18 worldwide loss-of-control accidents and incidents determined that the lack of external visual references was associated with a flight crew's loss of attitude awareness or energy state awareness in 17 of these events. Therefore, CAST recommended development and implementation of virtual day-Visual Meteorological Condition (VMC) display systems, such as synthetic vision systems, which can promote flight crew attitude awareness similar to a day-VMC environment. This paper describes the results of a high-fidelity, large transport aircraft simulation experiment that evaluated virtual day-VMC displays and a "background attitude indicator" concept as an aid to pilots in recovery from unusual attitudes. Twelve commercial airline pilots performed multiple unusual attitude recoveries and both quantitative and qualitative dependent measures were collected. Experimental results and future research directions under this CAST initiative and the NASA "Technologies for Airplane State Awareness" research project are described.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-17
... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/ Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...
Fixed Wing Project: Technologies for Advanced Air Transports
NASA Technical Reports Server (NTRS)
Del Rosario, Ruben; Koudelka, John M.; Wahls, Richard A.; Madavan, Nateri
2014-01-01
The NASA Fundamental Aeronautics Fixed Wing (FW) Project addresses the comprehensive challenge of enabling revolutionary energy efficiency improvements in subsonic transport aircraft combined with dramatic reductions in harmful emissions and perceived noise to facilitate sustained growth of the air transportation system. Advanced technologies and the development of unconventional aircraft systems offer the potential to achieve these improvements. Multidisciplinary advances are required in aerodynamic efficiency to reduce drag, structural efficiency to reduce aircraft empty weight, and propulsive and thermal efficiency to reduce thrust-specific energy consumption (TSEC) for overall system benefit. Additionally, advances are required to reduce perceived noise without adversely affecting drag, weight, or TSEC, and to reduce harmful emissions without adversely affecting energy efficiency or noise. The presentation will highlight the Fixed Wing project vision of revolutionary systems and technologies needed to achieve these challenging goals. Specifically, the primary focus of the FW Project is on the N+3 generation; that is, vehicles that are three generations beyond the current state of the art, requiring mature technology solutions in the 2025-30 timeframe.
Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.
2017-01-01
Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867
Cryogenic Fluid Management Technologies for Advanced Green Propulsion Systems
NASA Technical Reports Server (NTRS)
Motil, Susan M.; Meyer, Michael L.; Tucker, Stephen P.
2007-01-01
In support of the Exploration Vision for returning to the Moon and beyond, NASA and its partners are developing and testing critical cryogenic fluid propellant technologies that will meet the need for high performance propellants on long-term missions. Reliable knowledge of low-gravity cryogenic fluid management behavior is lacking and yet is critical in the areas of tank thermal and pressure control, fluid acquisition, mass gauging, and fluid transfer. Such knowledge can significantly reduce or even eliminate tank fluid boil-off losses for long term missions, reduce propellant launch mass and required on-orbit margins, and simplify vehicle operations. The Propulsion and Cryogenic Advanced Development (PCAD) Project is performing experimental and analytical evaluation of several areas within Cryogenic Fluid Management (CFM) to enable NASA's Exploration Vision. This paper discusses the status of the PCAD CFM technology focus areas relative to the anticipated CFM requirements to enable execution of the Vision for Space Exploration.
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec, and short term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self motion perception.
NASA Astrophysics Data System (ADS)
Song, Weitao; Weng, Dongdong; Feng, Dan; Li, Yuqian; Liu, Yue; Wang, Yongtian
2015-05-01
As one of the popular immersive Virtual Reality (VR) systems, the stereoscopic cave automatic virtual environment (CAVE) system typically consists of four to six 3 m-by-3 m sides of a room made of rear-projected screens. While many endeavors have been made to reduce the size of the projection-based CAVE system, the issue of asthenopia caused by lengthy exposure to stereoscopic images in such a CAVE with a close viewing distance has seldom been addressed. In this paper, we propose a lightweight approach which utilizes a convex eyepiece to reduce visual discomfort induced by stereoscopic vision. An empirical experiment was conducted to examine the feasibility of the convex eyepiece over a large depth of field (DOF) at close viewing distance, both objectively and subjectively. The result shows the positive effects of the convex eyepiece on the relief of eyestrain.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight [[Page 38864
NativeView: A Geospatial Curriculum for Native Nation Building
NASA Astrophysics Data System (ADS)
Rattling Leaf, J.
2007-12-01
In the spirit of collaboration and reciprocity, James Rattling Leaf of Sinte Gleska University on the Rosebud Reservation of South Dakota will present recent developments, experiences, insights, and a vision for education in Indian Country. As a thirty-year-young institution, Sinte Gleska University was founded on a strong vision of ancestral leadership and the values of the Lakota Way of Life. Sinte Gleska University (SGU) has initiated the development of a Geospatial Education Curriculum project. NativeView: A Geospatial Curriculum for Native Nation Building is a two-year project that entails a disciplined approach towards the development of a relevant geospatial academic curriculum. This project is designed to meet the educational and land management needs of the Rosebud Lakota Tribe through the utilization of Geographic Information Systems (GIS), Remote Sensing (RS), and Global Positioning Systems (GPS). In conjunction with the strategy and progress of this academic project, a formal presentation and demonstration of the SGU-based geospatial software RezMapper will exemplify an innovative example of state-of-the-art information technology. RezMapper is an interactive CD software package focused on the 21 Lakota communities on the Rosebud Reservation that utilizes an ingenious concept of multimedia mapping and state-of-the-art data compression and presentation. This ongoing development utilizes geographic data, imagery from space, historical aerial photography, and cultural features such as historic Lakota documents, language, song, video, and historical photographs in a multimedia fashion. As a tangible product, RezMapper will be a project deliverable tool for use in the classroom and by a broad range of learners.
A Structured Light Sensor System for Tree Inventory
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong; Zemek, Michael C.
2000-01-01
Tree inventory refers to the measurement and estimation of marketable wood volume in a piece of land or forest for purposes such as investment or loan applications. Existing techniques rely on trained surveyors conducting measurements manually using simple optical or mechanical devices, and hence are time consuming, subjective, and error prone. The advance of computer vision techniques makes it possible to conduct automatic measurements that are more efficient, objective, and reliable. This paper describes 3D measurements of tree diameters using a uniquely designed ensemble of two line laser emitters rigidly mounted on a video camera. The proposed laser camera system relies on a fixed distance between two parallel laser planes and the projections of laser lines to calculate tree diameters. Performance of the laser camera system is further enhanced by fusion of information induced from structured lighting with that contained in video images. A comparison is made between the laser camera sensor system and a stereo vision system previously developed for measurements of tree diameters.
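Because the two laser planes are parallel and a known distance apart, their projected lines fix a metric scale directly in the image at the trunk's depth. The following is a minimal sketch of that idea under simplifying assumptions (a fronto-parallel trunk and a pinhole camera); the function and parameter names are illustrative, not taken from the paper.

```python
def tree_diameter_mm(plane_separation_mm, line_gap_px, trunk_width_px):
    """Estimate trunk diameter from two projected parallel laser lines.

    plane_separation_mm: known fixed distance between the laser planes.
    line_gap_px: measured pixel distance between the two laser lines.
    trunk_width_px: measured pixel width of the trunk at the lines.
    """
    # The known plane separation fixes the millimetres-per-pixel scale
    # at the trunk's depth; the trunk width then converts directly.
    mm_per_px = plane_separation_mm / line_gap_px
    return trunk_width_px * mm_per_px

# A 100 mm plane separation seen as 50 px implies 2 mm/px, so a
# trunk spanning 150 px measures 300 mm in diameter.
print(tree_diameter_mm(100.0, 50.0, 150.0))  # → 300.0
```

The paper's system refines this basic geometry by fusing the structured-light cue with image content; curvature of the trunk cross-section would also need correction in practice.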
... magnifying reading glasses or loupes for seeing the computer screen, sheet music, or for sewing telescopic glasses ... for the Blind services. The Low Vision Pilot Project The American Foundation for the Blind (AFB) has ...
A Vision in Aeronautics: The K-12 Wind Tunnel Project
NASA Technical Reports Server (NTRS)
1997-01-01
A Vision in Aeronautics, a project within the NASA Lewis Research Center's Information Infrastructure Technologies and Applications (IITA) K-12 Program, employs small-scale, subsonic wind tunnels to inspire students to explore the world of aeronautics and computers. Recently, two educational K-12 wind tunnels were built in the Cleveland area. During the 1995-1996 school year, preliminary testing occurred in both tunnels.
Shared Perception for Autonomous Systems
2015-08-24
minivan or sport utility vehicle (SUV) may be around 1.8 meters tall. Next, a height distribution of ~1.5, 0.3 was used to project the car detections... Vision, vol. 60, no. 2, 2004, pp. 91–110. 4. N. Snavely, S.M. Seitz, and R. Szeliski, "Photo Tourism: Exploring Photo Collections in 3D," Proceedings of
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system consisting of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. Once all projection matrices are estimated, the matches between consecutive images are detected and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, better suited to this type of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
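The triangulation step in the abstract above can be sketched with the standard linear (DLT) method: each view contributes two linear constraints on the homogeneous 3D point, and the solution is the null vector of the stacked system. This is a generic illustration with toy camera matrices, not the paper's implementation.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: matched 2D image points (u, v) in each view.
    """
    # Each observation (u, v) yields two rows of a homogeneous
    # linear system A X = 0 in the unknown 3D point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value; dehomogenize to Euclidean coordinates.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: an identity camera and a second camera translated one
# unit along x, both with identity intrinsics.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_point(P1, P2, x1, x2)
```

In the paper's pipeline this step would run on matched interest points between real images, followed by local bundle adjustment to refine both the recovered points and the newly estimated projection matrix.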
NASA Astrophysics Data System (ADS)
Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella
In this paper, we propose a novel approach of using interactive virtual environment technology in Vision Restoration Therapy for vision loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvements can be seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye, and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.
Sensor Webs as Virtual Data Systems for Earth Science
NASA Astrophysics Data System (ADS)
Moe, K. L.; Sherwood, R.
2008-05-01
The NASA Earth Science Technology Office established a 3-year Advanced Information Systems Technology (AIST) development program in late 2006 to explore the technical challenges associated with integrating sensors, sensor networks, data assimilation and modeling components into virtual data systems called "sensor webs". The AIST sensor web program was initiated in response to a renewed emphasis on the sensor web concepts. In 2004, NASA proposed an Earth science vision for a more robust Earth observing system, coupled with remote sensing data analysis tools and advances in Earth system models. The AIST program is conducting the research and developing components to explore the technology infrastructure that will enable the visionary goals. A working statement for a NASA Earth science sensor web vision is the following: On-demand sensing of a broad array of environmental and ecological phenomena across a wide range of spatial and temporal scales, from a heterogeneous suite of sensors both in-situ and in orbit. Sensor webs will be dynamically organized to collect data, extract information from it, accept input from other sensor / forecast / tasking systems, interact with the environment based on what they detect or are tasked to perform, and communicate observations and results in real time. The focus on sensor webs is to develop the technology and prototypes to demonstrate the evolving sensor web capabilities. There are 35 AIST projects ranging from 1 to 3 years in duration addressing various aspects of sensor webs involving space sensors such as Earth Observing-1, in situ sensor networks such as the southern California earthquake network, and various modeling and forecasting systems. Some of these projects build on proof-of-concept demonstrations of sensor web capabilities like the EO-1 rapid fire response initially implemented in 2003. 
Other projects simulate future sensor web configurations to evaluate the effectiveness of sensor-model interactions for producing improved science predictions. Still other projects are maturing technology to support autonomous operations, communications and system interoperability. This paper will highlight lessons learned by various projects during the first half of the AIST program. Several sensor web demonstrations have been implemented and resulting experience with evolving standards, such as the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) among others, will be featured. The role of sensor webs in support of the intergovernmental Group on Earth Observations' Global Earth Observation System of Systems (GEOSS) will also be discussed. The GEOSS vision is a distributed system of systems that builds on international components to supply observing and processing systems that are, in the whole, comprehensive, coordinated and sustained. Sensor web prototypes are under development to demonstrate how remote sensing satellite data, in situ sensor networks and decision support systems collaborate in applications of interest to GEO, such as flood monitoring. Furthermore, the international Committee on Earth Observation Satellites (CEOS) has stepped up to the challenge to provide the space-based systems component for GEOSS. CEOS has proposed "virtual constellations" to address emerging data gaps in environmental monitoring, avoid overlap among observing systems, and make maximum use of existing space and ground assets. Exploratory applications that support the objectives of virtual constellations will also be discussed as a future role for sensor webs.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.
2020 Vision: Envisioning a New Generation of STEM Learning Research
ERIC Educational Resources Information Center
Dierking, Lynn D.; Falk, John H.
2016-01-01
In this issue, we have compiled six original papers, outcomes from the U.S. National Science Foundation (US-NSF)-funded REESE (Research and Evaluation on Education in Science and Engineering) 2020 Vision: The Next Generation of STEM Learning Research project. The purpose of 2020 Vision was to re-envision the questions and frameworks guiding STEM…
Towards a Framework for Modeling Space Systems Architectures
NASA Technical Reports Server (NTRS)
Shames, Peter; Skipper, Joseph
2006-01-01
Topics covered include: 1) Statement of the problem: a) Space system architecture is complex; b) Existing terrestrial approaches must be adapted for space; c) Need a common architecture methodology and information model; d) Need appropriate set of viewpoints. 2) Requirements on a space systems model. 3) Model Based Engineering and Design (MBED) project: a) Evaluated different methods; b) Adapted and utilized RASDS & RM-ODP; c) Identified useful set of viewpoints; d) Did actual model exchanges among selected subset of tools. 4) Lessons learned & future vision.
Connected and autonomous vehicles 2040 vision.
DOT National Transportation Integrated Search
2014-07-01
The Pennsylvania Department of Transportation (PennDOT) commissioned a one-year project, Connected and Autonomous Vehicles 2040 Vision, with researchers at Carnegie Mellon University (CMU) to assess the implications of connected and autonomous ve...
Quasi-eccentricity error modeling and compensation in vision metrology
NASA Astrophysics Data System (ADS)
Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin
2018-04-01
Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error which needs to be compensated in high-accuracy measurement. In this study, the impact of the lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error turns to a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point to the true projection of the circle center. Both simulation and real experiment confirm the effectiveness of the proposed method in several vision applications.
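The eccentricity effect described above can be illustrated numerically: under perspective projection, the center of the imaged ellipse is not the projection of the circle's center. A minimal sketch under assumed, illustrative camera parameters (focal length, tilt, and circle pose are invented here, not taken from the paper): map a circle conic through a plane-to-image homography and compare the image conic's center with the projected circle center.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def inv3(M):
    # 3x3 inverse via the adjugate matrix
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

# Assumed camera: 800 px focal length, principal point (320, 240),
# target plane tilted 30 degrees, 5 units in front of the camera.
theta = math.radians(30)
cth, sth = math.cos(theta), math.sin(theta)
K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]
H = matmul(K, [[1, 0, 0], [0, cth, 0], [0, sth, 5]])  # plane-to-image homography

# Circle of radius 0.3 centered at (0.5, 0.2) on the plane, as a conic matrix
cx, cy, r = 0.5, 0.2, 0.3
C = [[1, 0, -cx], [0, 1, -cy], [-cx, -cy, cx*cx + cy*cy - r*r]]

# Image conic C' = H^-T C H^-1; its center differs from the projected center
Hinv = inv3(H)
Cp = matmul(transpose(Hinv), matmul(C, Hinv))
a, b, d = Cp[0][0], Cp[0][1], Cp[0][2]
cc, e = Cp[1][1], Cp[1][2]
den = a * cc - b * b
ellipse_center = ((b * e - cc * d) / den, (b * d - a * e) / den)

# True projection of the circle's 3D center
w = H[2][0] * cx + H[2][1] * cy + H[2][2]
true_center = ((H[0][0] * cx + H[0][1] * cy + H[0][2]) / w,
               (H[1][0] * cx + H[1][1] * cy + H[1][2]) / w)

eccentricity_px = math.hypot(ellipse_center[0] - true_center[0],
                             ellipse_center[1] - true_center[1])
```

For this geometry the offset is on the order of a pixel, which is exactly the error the paper's iterative refinement is designed to remove.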
NASA Astrophysics Data System (ADS)
2011-03-01
WE RECOMMEND
Requiem for a Species: this book delivers a sober message about climate change.
Laser Sound System: sound kit is useful for laser demonstrations.
EasySense VISION: Data Harvest produces another easy-to-use data logger.
UV Flash Kit: useful equipment captures shadows on film.
The Demon-Haunted World: world-famous astronomer attacks pseudoscience in this book.
Nonsense on Stilts: a thought-provoking analysis of hard and soft sciences.
How to Think about Weird Things: this book explores the credibility of astrologers and their ilk.
WORTH A LOOK
Chameleon Nano Flakes: product lacks good instructions and guidelines.
WEB WATCH
Amateur scientists help out researchers with a variety of online projects.
Advanced Pathway Guidance Evaluations on a Synthetic Vision Head-Up Display
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prinzel, Lawrence J., III; Arthur, Jarvis J., III; Bailey, Randall E.
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to potentially eliminate low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced guidance for commercial and business aircraft. This experiment evaluated the influence of different pathway and guidance display concepts upon pilot situation awareness (SA), mental workload, and flight path tracking performance for Synthetic Vision display concepts using a Head-Up Display (HUD). Two pathway formats (dynamic and minimal tunnel presentations) were evaluated against a baseline condition (no tunnel) during simulated instrument meteorological conditions approaches to Reno-Tahoe International Airport. Two guidance cues (tadpole, follow-me aircraft) were also evaluated to assess their influence. Results indicated that the presence of a tunnel on an SVS HUD had no effect on flight path performance but that it did have significant effects on pilot SA and mental workload. The dynamic tunnel concept with the follow-me aircraft guidance symbol produced the lowest workload and provided the highest SA among the tunnel concepts evaluated.
Pathway Design Effects on Synthetic Vision Head-Up Displays
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prinzel, Lawrence J., III; Arthur, Jarvis J., III; Bailey, Randall E.
2004-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will eliminate low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. This experiment evaluated the influence of different tunnel and guidance concepts upon pilot situation awareness (SA), mental workload, and flight path tracking performance for Synthetic Vision display concepts using a Head-Up Display (HUD). Two tunnel formats (dynamic, minimal) were evaluated against a baseline condition (no tunnel) during simulated instrument meteorological conditions (IMC) approaches to Reno-Tahoe International Airport. Two guidance cues (tadpole, follow-me aircraft) were also evaluated to assess their influence on the tunnel formats. Results indicated that the presence of a tunnel on an SVS HUD had no effect on flight path performance but that it did have significant effects on pilot SA and mental workload. The dynamic tunnel concept with the follow-me aircraft guidance symbol produced the lowest workload and provided the highest SA among the tunnel concepts evaluated.
Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian
2017-08-01
Vision measurement on the basis of structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points that are randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions of the 3D laser points are solved by the homogeneous linear equations generated from the projection invariants. The optimization function is created by the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions of the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of the image quantity, the lens distortion and the noises are investigated in the experiments, which demonstrate that the reconstruction approach yields accurate measurements in the measurement system.
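The optimization described above is driven by re-projection error: the image-plane distance between observed points and the projections of their estimated 3D positions. A minimal sketch of that error term under a generic 3x4 pinhole projection matrix (the matrix and points below are illustrative, not values from the paper):

```python
def project(P, X):
    """Project 3D point X = (x, y, z) through 3x4 matrix P and dehomogenize."""
    x, y, z = X
    u = P[0][0]*x + P[0][1]*y + P[0][2]*z + P[0][3]
    v = P[1][0]*x + P[1][1]*y + P[1][2]*z + P[1][3]
    w = P[2][0]*x + P[2][1]*y + P[2][2]*z + P[2][3]
    return (u / w, v / w)

def mean_reprojection_error(P, points3d, points2d):
    """Mean Euclidean distance between observed and re-projected points,
    i.e., the quantity a nonlinear optimizer would minimize."""
    total = 0.0
    for X, obs in zip(points3d, points2d):
        u, v = project(P, X)
        total += ((u - obs[0])**2 + (v - obs[1])**2) ** 0.5
    return total / len(points3d)

# Canonical projection [I | 0] for illustration
P = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
pts3d = [(2.0, 4.0, 2.0), (0.0, 0.0, 1.0)]
pts2d = [(1.0, 2.0), (3.0, 4.0)]  # second observation is off by (3, 4)
err = mean_reprojection_error(P, pts3d, pts2d)  # (0 + 5) / 2 = 2.5
```

In the paper's setting, the same residual is evaluated for laser points and target points simultaneously, with camera intrinsics and distortion coefficients as additional free parameters.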
ERIC Educational Resources Information Center
Freiler, Christa; Hurley, Stephen; Canuel, Ron; McGahey, Bob; Froese-Germain, Bernie; Riel, Rick
2012-01-01
"Teaching the Way We Aspire to Teach--Now and in the Future" is a collaborative research project between the Canadian Education Association (CEA) and the Canadian Teachers' Federation (CTF). The project grew out of a shared interest in exploring with teachers their experiences and visions of teaching the way in which they aspire--that…
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
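Partitioning a spatial index across cluster nodes, as described above, is commonly done with a space-filling curve so that spatially adjacent image blocks land on the same node. A hypothetical sketch using a Morton (Z-order) code; this is a generic illustration of the technique, not the openconnecto.me implementation:

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a Morton (Z-order) code,
    so that points close in 3-d space tend to be close on the curve."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def node_for(x, y, z, n_nodes, bits=10):
    """Range-partition the Morton curve across n_nodes cluster nodes:
    each node owns a contiguous run of the curve, preserving locality."""
    return morton3(x, y, z, bits) * n_nodes >> (3 * bits)

# Blocks in the same spatial neighborhood map to the same node
assignments = {node_for(x, y, z, 4) for x in (0, 1) for y in (0, 1) for z in (0, 1)}
```

Locality matters here because the vision algorithms read spatially contiguous cutouts; a range partition of the curve keeps most cutout reads on a single node.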
NASA Technical Reports Server (NTRS)
Shelton, Kevin J.; Kramer, Lynda J.; Ellis, Kyle K.; Rehfeld, Sherri A.
2012-01-01
The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream's G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC's Research Flight Deck simulator.
Initial test of MITA/DIMM with an operational CBP system
NASA Astrophysics Data System (ADS)
Baldwin, Kevin; Hanna, Randall; Brown, Andrea; Brown, David; Moyer, Steven; Hixson, Jonathan G.
2018-05-01
The MITA (Motion Imagery Task Analyzer) project was conceived by CBP OA (Customs and Border Protection - Office of Acquisition) and executed by JHU/APL (Johns Hopkins University/Applied Physics Laboratory) and CERDEC NVESD MSD (Communications and Electronics Research Development Engineering Command Night Vision and Electronic Sensors Directorate Modeling and Simulation Division). The intent was to develop an efficient methodology whereby imaging system performance could be quickly and objectively characterized in a field setting. The initial design, development, and testing spanned a period of approximately 18 months with the initial project coming to a conclusion after testing of the MITA system in June 2017 with a fielded CBP system. The NVESD contribution to MITA was thermally heated target resolution boards deployed to support a range close to the sensor and, when possible, at range with the targets of interest. JHU/APL developed a laser DIMM (Differential Image Motion Monitor) system designed to measure the optical turbulence present along the line of sight of the imaging system during the time of image collection. The imagery collected of the target board was processed to calculate the in situ system resolution. This in situ imaging system resolution and the time-correlated turbulence measured by the DIMM system were used in NV-IPM (Night Vision Integrated Performance Model) to calculate the theoretical imaging system performance. Overall, this proves the MITA concept feasible. However, MITA is still in the initial phases of development and requires further verification and validation to ensure accuracy and reliability of both the instrument and the imaging system performance predictions.
DARPA super resolution vision system (SRVS) robust turbulence data collection and analysis
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Thompson, Roger; Tofsted, David; D'Arcy, Sean
2014-05-01
Atmospheric turbulence degrades the range performance of military imaging systems, specifically those intended for long range, ground-to-ground target identification. The recent Defense Advanced Research Projects Agency (DARPA) Super Resolution Vision System (SRVS) program developed novel post-processing system components to mitigate turbulence effects on visible and infrared sensor systems. As part of the program, the US Army RDECOM CERDEC NVESD and the US Army Research Laboratory Computational & Information Sciences Directorate (CISD) collaborated on a field collection and atmospheric characterization of a two-handed weapon identification dataset through a diurnal cycle for a variety of ranges and sensor systems. The robust dataset is useful in developing new models and simulations of turbulence, as well as in providing a standard baseline for comparison of sensor systems in the presence of turbulence degradation and mitigation. In this paper, we describe the field collection and atmospheric characterization and present the robust dataset to the defense, sensing, and security community. In addition, we present an expanded model validation of turbulence degradation using the field collected video sequences.
BBN-Based Portfolio Risk Assessment for NASA Technology R&D Outcome
NASA Technical Reports Server (NTRS)
Geuther, Steven C.; Shih, Ann T.
2016-01-01
The NASA Aeronautics Research Mission Directorate (ARMD) vision falls into six strategic thrusts that are aimed to support the challenges of the Next Generation Air Transportation System (NextGen). In order to achieve the goals of the ARMD vision, the Airspace Operations and Safety Program (AOSP) is committed to developing and delivering new technologies. To meet the dual challenges of constrained resources and timely technology delivery, program portfolio risk assessment is critical for communication and decision-making. This paper describes how a Bayesian Belief Network (BBN) is applied to assess the probability of a technology meeting the expected outcome. The network takes into account the different risk factors of the technology development and implementation phases. The use of BBNs allows the technologies of all projects in a program portfolio to be separately examined and compared. In addition, the technology interaction effects are modeled through the application of object-oriented BBNs. The paper discusses the development of simplified project risk BBNs and presents various risk results. The results presented include the probability of project risks not meeting success criteria, the risk drivers under uncertainty via sensitivity analysis, and what-if analysis. Finally, the paper shows how program portfolio risk can be assessed using risk results from BBNs of projects in the portfolio.
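The core BBN computation, marginalizing a technology's outcome probability over its risk-factor states, can be sketched with a toy two-factor network. All probabilities below are invented for illustration; the paper's networks and conditional probability tables are more elaborate:

```python
def prob_outcome_met(p_dev_high, p_impl_high, cpt):
    """Marginalize P(outcome met) over the development-risk and
    implementation-risk states of a tiny two-factor network."""
    total = 0.0
    for dev_high in (True, False):
        for impl_high in (True, False):
            weight = ((p_dev_high if dev_high else 1 - p_dev_high)
                      * (p_impl_high if impl_high else 1 - p_impl_high))
            total += weight * cpt[(dev_high, impl_high)]
    return total

# Illustrative conditional probability table: P(met | dev risk, impl risk)
cpt = {(True, True): 0.2, (True, False): 0.5,
       (False, True): 0.6, (False, False): 0.9}

baseline = prob_outcome_met(0.3, 0.2, cpt)   # 0.72
what_if = prob_outcome_met(0.1, 0.2, cpt)    # 0.80 if development risk is mitigated
```

What-if and sensitivity analyses, as mentioned in the abstract, amount to re-running this marginalization with altered priors or CPT entries and comparing the resulting outcome probabilities.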
Summerskill, Stephen; Marshall, Russell; Cook, Sharon; Lenard, James; Richardson, John
2016-03-01
The aim of the study is to understand the nature of blind spots in the vision of drivers of Large Goods Vehicles caused by vehicle design variables such as the driver eye height, and mirror designs. The study was informed by the processing of UK national accident data using cluster analysis to establish if vehicle blind spots contribute to accidents. In order to establish the cause and nature of blind spots six top selling trucks in the UK, with a range of sizes were digitized and imported into the SAMMIE Digital Human Modelling (DHM) system. A novel CAD based vision projection technique, which has been validated in a laboratory study, allowed multiple mirror and window aperture projections to be created, resulting in the identification and quantification of a key blind spot. The identified blind spot was demonstrated to have the potential to be associated with the scenarios that were identified in the accident data. The project led to the revision of UNECE Regulation 46 that defines mirror coverage in the European Union, with new vehicle registrations in Europe being required to meet the amended standard after June of 2015. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.
1996-01-01
The goal of this task was to create a design and prototype implementation of a database environment that is particularly suited for handling the image, vision and scientific data associated with NASA's EOC Amazon project. The focus was on a data model and query facilities that are designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.
Sociology of Low Expectations: Recalibration as Innovation Work in Biomedicine.
Gardner, John; Samuel, Gabrielle; Williams, Clare
2015-11-01
Social scientists have drawn attention to the role of hype and optimistic visions of the future in providing momentum to biomedical innovation projects by encouraging innovation alliances. In this article, we show how less optimistic, uncertain, and modest visions of the future can also provide innovation projects with momentum. Scholars have highlighted the need for clinicians to carefully manage the expectations of their prospective patients. Using the example of a pioneering clinical team providing deep brain stimulation to children and young people with movement disorders, we show how clinicians confront this requirement by drawing on their professional knowledge and clinical expertise to construct visions of the future with their prospective patients; visions which are personalized, modest, and tainted with uncertainty. We refer to this vision-constructing work as recalibration, and we argue that recalibration enables clinicians to manage the tension between the highly optimistic and hyped visions of the future that surround novel biomedical interventions, and the exigencies of delivering those interventions in a clinical setting. Drawing on work from science and technology studies, we suggest that recalibration enrolls patients in an innovation alliance by creating a shared understanding of how the "effectiveness" of an innovation shall be judged.
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, since its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances. PMID:26861351
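Triangulation from back-projected rays, as used above, reduces to finding the 3D point closest to two rays; the midpoint of the shortest connecting segment is the usual closed form. A minimal sketch of that step (generic ray geometry, not the authors' omnistereo calibration):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between rays
    p1 + t*d1 and p2 + s*d2 (the least-squares point for two rays)."""
    r = [x - y for x, y in zip(p2, p1)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    den = a * c - b * b          # zero only for parallel rays
    t = (e * c - b * f) / den
    s = (b * e - a * f) / den
    q1 = [p + t * d for p, d in zip(p1, d1)]
    q2 = [p + s * d for p, d in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two rays that intersect exactly at (1, 2, 3)
point = triangulate_midpoint([0, 0, 0], [1, 2, 3], [1, 2, 0], [0, 0, 1])
```

With noisy correspondences the two rays are skew, and the residual distance between the closest points on each ray is a natural input to the kind of uncertainty model the abstract describes.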
Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring
NASA Technical Reports Server (NTRS)
Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.
2015-01-01
Project presentation for the Game Changing Program Smart Book Release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and includes an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. The objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept ground based damage detection and inspection system.
Alabama Black Belt eye care--optometry giving back.
Sanspree, Mary Jean; Allison, Carol; Goldblatt, Stephanie Hardwick; Pevsner, Diane
2008-12-01
The aim of this study was to describe the process used to meet the vision needs, as well as other health problems related to eye disease, of individuals in the rural Black Belt region of Alabama. This model includes a multidisciplinary collaborative effort that has developed into a replicable vision care delivery system. This study was a descriptive research study. Vision and health evaluations were made available to residents of rural counties with a specific focus on an area in Alabama known as the "Black Belt." The model for the project was designed with input from the collaborative partners who were responsible for each health and vision station. Participants in the Rural Alabama Diabetes and Glaucoma Initiative (RADGI) study comprised 1,765 black women, 619 black men, and 315 others. The study included 2,699 participants in 7 counties. The reported ages of the patients ranged from 5 to 97 years, with a mean age of 44. Of the 2,699 patients, 39% (1,053) were found to have a visual acuity of 20/40 or worse. Spectacles were prescribed for 56% of the patients who required correction other than reading glasses. There was a 19% (513) referral rate for glaucoma. There was a 2.7% (73) referral rate for diabetic retinopathy. Two hundred sixteen patients presented with cataracts (8%) and were referred to eye care providers for follow-up evaluations. The 9.9% of patients who were known diabetics (267) were referred to either a general physician familiar with the patient history or, if no general physician was reported by the patient, another local physician for evaluation. Because there were no subspecialists in these local communities, the 10% of the patients (270) who were undiagnosed diabetics but showed the risk factor of a hemoglobin A1c greater than 7% were referred to a general physician or local emergency room for follow-up care.
One thousand fifty-five patients (35.9%) with a blood pressure greater than 140/90 mmHg (systolic greater than 140 mmHg or diastolic greater than 90 mmHg) were referred to a physician or to the emergency room as indicated. Based on the success of the RADGI project, the project was found to be a sound design for implementing a vision care delivery system in economically distressed rural areas that will address health disparities, barriers to health care, health care access, and patient clinical and educational follow-up.
PlantCV v2: Image analysis software for high-throughput plant phenotyping
Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony
2017-01-01
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576
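A typical first stage in a plant-phenotyping image pipeline of the kind described above is segmenting plant pixels from background; one common heuristic is thresholding the excess-green index ExG = 2G - R - B. A minimal sketch (the index and threshold here are generic illustrations, not a specific PlantCV module):

```python
def excess_green_mask(image, threshold=20):
    """Binary plant/background mask from the excess-green index
    ExG = 2*G - R - B, computed per pixel on an RGB image given
    as nested lists of (r, g, b) tuples."""
    return [[1 if 2 * g - r - b > threshold else 0
             for (r, g, b) in row]
            for row in image]

# Tiny synthetic image: left column green-ish (plant), right column grey-ish
img = [[(40, 120, 30), (90, 85, 80)],
       [(35, 130, 25), (100, 95, 90)]]
mask = excess_green_mask(img)        # [[1, 0], [1, 0]]
```

Downstream phenotyping steps (area, shape, landmark extraction) then operate on the masked plant region rather than the raw image.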
PlantCV v2: Image analysis software for high-throughput plant phenotyping.
Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony
2017-01-01
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
PlantCV v2: Image analysis software for high-throughput plant phenotyping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
PlantCV v2: Image analysis software for high-throughput plant phenotyping
Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; ...
2017-12-01
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rockhold, Mark L.
2008-09-26
The objective of Activity 1.B of the Remediation Decision Support (RDS) Project is to compile all available physical and hydraulic property data for sediments from the Hanford Site, to port these data into the Hanford Environmental Information System (HEIS), and to make the data web-accessible to anyone on the Hanford Local Area Network via the so-called Virtual Library. In past years efforts were made by RDS project staff to compile all available physical and hydraulic property data for Hanford sediments and to transfer these data into SoilVision®, a commercial geotechnical software package designed for storing, analyzing, and manipulating soils data. Although SoilVision® has proven to be useful, its access and use restrictions have been recognized as a limitation to the effective use of the physical and hydraulic property databases by the broader group of potential users involved in Hanford waste site issues. In order to make these data more widely available and useable, a decision was made to port them to HEIS and to make them web-accessible via a Virtual Library module. In FY08 the objectives of Activity 1.B of the RDS Project were to: (1) ensure traceability and defensibility of all physical and hydraulic property data currently residing in the SoilVision® database maintained by PNNL, (2) transfer the physical and hydraulic property data from the Microsoft Access database files used by SoilVision® into HEIS, which has most recently been maintained by Fluor-Hanford, Inc., (3) develop a Virtual Library module for accessing these data from HEIS, and (4) write a User's Manual for the Virtual Library module. The development of the Virtual Library module was to be performed by a third party under subcontract to Fluor.
The intent of these activities is to make the available physical and hydraulic property data more readily accessible and useable by technical staff and operable unit managers involved in waste site assessments and remedial action decisions for Hanford. This status report describes the history of this development effort and progress to date.
ERIC Educational Resources Information Center
DuBois, Bryce; Allred, Shorna; Bunting-Howarth, Katherine; Sanderson, Eric W.; Giampieri, Mario
2017-01-01
The Welikia project and the corresponding free online tool Visionmaker.NYC focus on the historical landscape ecologies of New York City. This article provides a brief introduction to online participatory tools, describes the Visionmaker tool in detail, and offers suggested ways to use the tool for Extension professionals based in and outside New…
Stephens, Martin L.; Barrow, Craig; Andersen, Melvin E.; Boekelheide, Kim; Carmichael, Paul L.; Holsapple, Michael P.; Lafranconi, Mark
2012-01-01
The U.S. National Research Council (NRC) report on “Toxicity Testing in the 21st Century” calls for a fundamental shift in the way that chemicals are tested for human health effects and evaluated in risk assessments. The new approach would move toward in vitro methods, typically using human cells in a high-throughput context. The in vitro methods would be designed to detect significant perturbations to “toxicity pathways,” i.e., key biological pathways that, when sufficiently perturbed, lead to adverse health outcomes. To explore progress on the report’s implementation, the Human Toxicology Project Consortium hosted a workshop on 9–10 November 2010 in Washington, DC. The Consortium is a coalition of several corporations, a research institute, and a non-governmental organization dedicated to accelerating the implementation of 21st-century toxicology as aligned with the NRC vision. The goal of the workshop was to identify practical and scientific ways to accelerate implementation of the NRC vision. The workshop format consisted of plenary presentations, breakout group discussions, and concluding commentaries. The program faculty was drawn from industry, academia, government, and public interest organizations. Most presentations summarized ongoing efforts to modernize toxicology testing and approaches, each with some overlap with the NRC vision. In light of these efforts, the workshop identified recommendations for accelerating implementation of the NRC vision, including greater strategic coordination and planning across projects (facilitated by a steering group), the development of projects that test the proof of concept for implementation of the NRC vision, and greater outreach and communication across stakeholder communities. PMID:21948868
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm that combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real time. Test results are shown for a variety of environments.
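The AdaBoost idea named above, iteratively fitting weak "stump" classifiers and re-weighting training samples so that later stumps focus on earlier mistakes, can be sketched in a few lines. This is a minimal, self-contained illustration on a one-dimensional feature, not the UCSD cascade implementation:

```python
import math

def train_stump(xs, ys, ws):
    """Pick the threshold/polarity stump with the lowest weighted error."""
    best = None
    for thr in sorted(set(xs)):
        for pol in (1, -1):
            err = sum(w for w, x, y in zip(ws, xs, ys)
                      if (pol if x >= thr else -pol) != y)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(xs, ys, rounds=5):
    """Train an ensemble of weighted stumps; labels must be +1/-1."""
    n = len(xs)
    ws = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, ws)
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)  # this stump's vote weight
        model.append((alpha, thr, pol))
        # Re-weight: misclassified samples gain weight for the next round.
        ws = [w * math.exp(-alpha * y * (pol if x >= thr else -pol))
              for w, x, y in zip(ws, xs, ys)]
        total = sum(ws)
        ws = [w / total for w in ws]
    return model

def predict(model, x):
    score = sum(a * (pol if x >= thr else -pol) for a, thr, pol in model)
    return 1 if score >= 0 else -1

model = adaboost([1, 2, 3, 8, 9, 10], [-1, -1, -1, 1, 1, 1], rounds=3)
```

In the cascade variant used for detection, many such boosted classifiers are chained so that cheap early stages reject most non-object image windows quickly.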
Gallivan, Jason P; Goodale, Melvyn A
2018-01-01
In 1992, Goodale and Milner proposed a division of labor in the visual pathways of the primate cerebral cortex. According to their account, the ventral pathway, which projects to occipitotemporal cortex, constructs our visual percepts, while the dorsal pathway, which projects to posterior parietal cortex, mediates the visual control of action. Although the framing of the two-visual-system hypothesis has not been without controversy, it is clear that vision for action and vision for perception have distinct computational requirements, and significant support for the proposed neuroanatomic division has continued to emerge over the last two decades from human neuropsychology, neuroimaging, behavioral psychophysics, and monkey neurophysiology. In this chapter, we review much of this evidence, with a particular focus on recent findings from human neuroimaging and monkey neurophysiology, demonstrating a specialized role for parietal cortex in visually guided behavior. But even though the available evidence suggests that dedicated circuits mediate action and perception, in order to produce adaptive goal-directed behavior there must be a close coupling and seamless integration of information processing across these two systems. We discuss such ventral-dorsal-stream interactions and argue that the two pathways play different, yet complementary, roles in the production of skilled behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
Keating, Joseph; Meekers, Dominique; Adewuyi, Alfred
2006-05-03
In response to the growing HIV epidemic in Nigeria, the U.S. Agency for International Development (USAID) initiated the VISION Project, which aimed to increase use of family planning, child survival, and HIV/AIDS services. The VISION Project used a mass-media campaign that focused on reproductive health and HIV/AIDS prevention. This paper assesses to what extent program exposure translates into increased awareness and prevention of HIV/AIDS. This analysis is based on data from the 2002 and 2004 Nigeria (Bauchi, Enugu, and Oyo) Family Planning and Reproductive Health Surveys, which were conducted among adults living in the VISION Project areas. To correct for endogeneity, two-stage logistic regression is used to investigate the effect of program exposure on 1) discussion of HIV/AIDS with a partner, 2) awareness that consistent condom use reduces HIV risk, and 3) condom use at last intercourse. Exposure to the VISION mass media campaign was high: 59%, 47%, and 24% were exposed to at least 1 VISION radio, printed advertisement, or TV program about reproductive health, respectively. The differences in outcome variables between 2002 baseline data and the 2004 follow-up data were small. However, those with high program exposure were almost one and a half (Odds Ratio [O.R.] = 1.47, 95% Confidence Interval [C.I.] 1.01-2.16) times more likely than those with no exposure to have discussed HIV/AIDS with a partner. Those with high program exposure were over twice (O.R. = 2.20, C.I. 1.49-3.25) as likely as those with low exposure to know that condom use can reduce risk of HIV infection. Program exposure had no effect on condom use at last sex. The VISION Project reached a large portion of the population and exposure to mass media programs about reproductive health and HIV prevention topics can help increase HIV/AIDS awareness. 
Programs that target rural populations, females, and unmarried individuals, and disseminate information on where to obtain condoms, are needed to reduce barriers to condom use. Improvements in HIV/AIDS prevention behaviour are likely to require that these programmatic efforts be continued, scaled up, done in conjunction with other interventions, and targeted towards individuals with specific socio-demographic characteristics.
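Odds ratios like those reported for the VISION campaign come from a 2×2 exposure-outcome table, with a confidence interval computed on the log scale. A minimal sketch with illustrative counts (not the survey's actual data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI for a 2x2 table:
       a = exposed with outcome,   b = exposed without outcome,
       c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/100 exposed vs. 20/100 unexposed discussed HIV/AIDS.
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)      # OR ≈ 2.67
```

The paper's two-stage logistic regression additionally corrects for endogeneity (people who seek out health media may differ systematically), which this raw-table calculation does not.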
2010-09-01
[Abstract fragments] …expenditures in the United States, with most of this cost associated with long-term complications of diabetes, specifically retinopathy and nerve damage… retinopathy as a cause of severe vision loss. Nevertheless, diabetes remains the leading cause of new blindness in working-aged adults in the United… diabetic retinopathy evaluations performed by an ophthalmologist or optometrist with a dilated eye examination and the JVN system using digital video…
Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes
Pinto-Lopera, Jesús Emilio; S. T. Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo
2016-01-01
Associated with weld quality, the weld bead geometry is one of the most important parameters in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not have occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement is compared using a statistical Welch's t-test of the null hypothesis, which in no case exceeds the threshold of significance level α = 0.01, validating the results and the performance of the proposed vision system. PMID:27649198
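Calibration by projective transformation means mapping pixel coordinates into metric coordinates through a 3×3 homography. A minimal sketch with an illustrative matrix (not the paper's actual calibration):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists)."""
    u = H[0][0]*x + H[0][1]*y + H[0][2]
    v = H[1][0]*x + H[1][1]*y + H[1][2]
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return u / w, v / w          # divide out the projective scale

# Hypothetical calibration: 0.1 mm per pixel with a small origin offset.
H = [[0.1, 0.0, -0.5],
     [0.0, 0.1, -0.5],
     [0.0, 0.0, 1.0]]
left_mm, _ = apply_homography(H, 105.0, 40.0)    # bead's left edge (pixels)
right_mm, _ = apply_homography(H, 165.0, 40.0)   # bead's right edge (pixels)
width_mm = right_mm - left_mm                    # bead width in millimetres
```

A full homography (estimated from a calibration target) also corrects perspective distortion, which a plain per-pixel scale factor cannot; that is why the bottom row of H is not always [0, 0, 1] in practice.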
Modeling and Simulation of Microelectrode-Retina Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckerman, M
2002-11-30
The goal of the retinal prosthesis project is the development of an implantable microelectrode array that can be used to supply visually-driven electrical input to cells in the retina, bypassing nonfunctional rod and cone cells, thereby restoring vision to blind individuals. This goal will be achieved through the study of the fundamentals of electrical engineering, vision research, and biomedical engineering with the aim of acquiring the knowledge needed to engineer a high-density microelectrode-tissue hybrid sensor that will restore vision to millions of blind persons. The modeling and simulation task within this project is intended to address the question of how best to stimulate, and communicate with, cells in the retina using implanted microelectrodes.
Roberts, Kasey; Park, Thomas; Elder, Nancy C; Regan, Saundra; Theodore, Sarah N; Mitchell, Monica J; Johnson, Yolanda N
2015-11-01
Urban Health Project (UHP) is a mission- and vision-driven summer internship at the University of Cincinnati College of Medicine that places first-year medical students at local community agencies that work with underserved populations. At the completion of their internship, students write Final Intern Reflections (FIRs). Final Intern Reflections written from 1987 to 2012 were read and coded both to predetermined categories derived from the UHP mission and vision statements and to new categories created from the data themselves. Comments relating to UHP's mission and vision were found in 47% and 36% of FIRs, respectively. Positive experiences outweighed negative by a factor of eight. Interns reported the following benefits: educational (53%), valuable (25%), rewarding (25%), new (10%), unique (6%), and life-changing (5%). Urban Health Project is successful in providing medical students with enriching experiences with underserved populations that have the potential to change their understanding of vulnerable populations.
Visions of human futures in space and SETI
NASA Astrophysics Data System (ADS)
Wright, Jason T.; Oman-Reagan, Michael P.
2018-04-01
We discuss how visions for the futures of humanity in space and SETI are intertwined, and are shaped by prior work in the fields and by science fiction. This appears in the language used in the fields, and in the sometimes implicit assumptions made in discussions of them. We give examples from articulations of the so-called Fermi Paradox, discussions of the settlement of the Solar System (in the near future) and the Galaxy (in the far future), and METI. We argue that science fiction, especially the campy variety, is a significant contributor to the `giggle factor' that hinders serious discussion and funding for SETI and Solar System settlement projects. We argue that humanity's long-term future in space will be shaped by our short-term visions for who goes there and how. Because of the way they entered the fields, we recommend avoiding the term `colony' and its cognates when discussing the settlement of space, as well as other terms with similar pedigrees. We offer examples of science fiction and other writing that broaden and challenge our visions of human futures in space and SETI. In an appendix, we use an analogy with the well-funded and relatively uncontroversial searches for the dark matter particle to argue that SETI's lack of funding in the national science portfolio is primarily a problem of perception, not inherent merit.
Flight Research and Validation Formerly Experimental Capabilities Supersonic Project
NASA Technical Reports Server (NTRS)
Banks, Daniel
2009-01-01
This slide presentation reviews the work of the Experimental Capabilities Supersonic project, which is being reorganized into Flight Research and Validation. The work of the Experimental Capabilities Project in FY '09 is reviewed, and the specific centers assigned to do the work are identified. The portfolio of the newly formed Flight Research and Validation (FRV) group is also reviewed, and the various FRV projects for FY '10 are detailed. These projects include: Eagle Probe, Channeled Centerbody Inlet Experiment (CCIE), Supersonic Boundary Layer Transition test (SBLT), Aero-elastic Test Wing-2 (ATW-2), G-V External Vision Systems (G5 XVS), Air-to-Air Schlieren (A2A), In-Flight Background Oriented Schlieren (BOS), Dynamic Inertia Measurement Technique (DIM), and Advanced In-Flight IR Thermography (AIR-T).
Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems
NASA Technical Reports Server (NTRS)
Liu, Xuan; Furrer, David; Kosters, Jared; Holmes, Jack
2018-01-01
Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, or Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). The report constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of nine expert panels (one per KE), each consisting on average of 10 non-team members from academia, government, and industry, to review and update content and prioritize gaps and actions.
The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focused on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive, biomedical, etc.) can benefit as well from the proposed framework with only minor modifications. Finally, it is TTT's hope and desire that this vision provides the strategic guidance to both public and private research and development decision makers to make the proposed 2040 vision state a reality and thereby provide a significant advancement in the United States global competitiveness.
Buildings of the Future Scoping Study: A Framework for Vision Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Na; Goins, John D.
2015-02-01
The Buildings of the Future Scoping Study, funded by the U.S. Department of Energy (DOE) Building Technologies Office, seeks to develop a vision for what U.S. mainstream commercial and residential buildings could become in 100 years. This effort is not intended to predict the future or develop a specific building design solution. Rather, it will explore future building attributes and offer possible pathways of future development. Whether we achieve a more sustainable built environment depends not just on technologies themselves, but on how effectively we envision the future and integrate these technologies in a balanced way that generates economic, social, and environmental value. A clear, compelling vision of future buildings will attract the right strategies, inspire innovation, and motivate action. This project will create a cross-disciplinary forum of thought leaders to share their views. The collective views will be integrated into a future building vision and published in September 2015. This report presents a research framework for the vision development effort based on a literature survey and gap analysis. This document has four objectives. First, it defines the project scope. Next, it identifies gaps in the existing visions and goals for buildings and discusses the possible reasons why some visions did not work out as hoped. Third, it proposes a framework to address those gaps in the vision development. Finally, it presents a plan for a series of panel discussions and interviews to explore a vision that mitigates problems with past building paradigms while addressing key areas that will affect buildings going forward.
Boyle, D M
1994-01-01
To discuss and project cancer care needs and a vision of oncology nursing in the next century. Scholarly, professional, and governmental sources of information. Projections of a changed patient/family profile, social support dilemmas, and a new "hybrid" oncology nurse. Opportunities for nurses resulting from these projections include roles as minority needs specialist, director of new care-delivery models, facilitator of intergenerational support teams, overseer of neighborhood-based care systems, multispecialty nursing care provider, and cancer care policy activist. Nursing education, community models, and current care-delivery settings will all be affected by the projected changes and will need to consider adjusting to meet the demands that will be placed on them to facilitate change.
Yu, Ping; Gandhidasan, Senthilkumar; Miller, Alexis A
2010-06-01
The experience of clinicians at two public hospitals in Sydney, Australia, with the introduction and use of an oncology information system (OIS) was examined to extract lessons to guide the introduction of clinical information systems in public hospitals. Semi-structured interviews were conducted with 12 of the 15 radiation oncologists employed at the two hospitals. The personnel involved in the decision-making process for the introduction of the system were contacted and their decision-making process revisited. The transcribed data were analyzed using NVivo software. Themes that emerged included implementation strategies and practices, the radiation oncologists' current use of and satisfaction with the OIS, project management, and the impact of the OIS on clinical practice. The hospitals had contrasting experiences in their introduction and use of the OIS. Hospital A used the OIS in all aspects of clinical documentation. Its implementation was associated with strong advocacy by the Head of Department, input by a designated project manager, and use and development of the system by all staff, with timely training and support. With no vision of developing a paperless information system, Hospital B used the OIS only for booking and patient tracking. A departmental policy that data entry for the OIS was centrally undertaken by administrative staff distanced clinicians from the system. All the clinicians considered that the OIS should continuously evolve to meet changing clinical needs and departmental quality improvement initiatives. This case study indicates that critical factors for the successful introduction of clinical information systems into the hospital environment were an initial clear vision to be paperless, strong clinical leadership and management at the departmental level, committed project management, and involvement of all staff, with appropriate training. Clinician engagement is essential for post-adoption evolution of clinical information systems.
Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
X-37 Flight Demonstrator Project: Capabilities for Future Space Transportation System Development
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel L.
2004-01-01
The X-37 Approach and Landing Test Vehicle (ALTV) is an automated (unmanned) spacecraft designed to reduce technical risk in the descent and landing phases of flight. ALTV mission requirements and Orbital Vehicle (OV) technology research and development (R&D) goals are formulated to validate and mature high-payoff ground and flight technologies such as Thermal Protection Systems (TPS). It has been more than three decades since the Space Shuttle was designed and built. Real-world hardware experience gained through the multitude of X-37 Project activities has expanded both Government and industry knowledge of the challenges involved in developing new generations of spacecraft that can fulfill the Vision for Space Exploration.
A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context
Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco
2014-01-01
Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. PMID:24854209
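The weighted fusion idea, combining per-view evidence before deciding, can be sketched at the score level. This is a deliberately simplified illustration with hypothetical action scores and weights, not the paper's feature-level fusion scheme:

```python
def fuse_views(view_scores, view_weights):
    """Weighted sum of per-view action scores; returns the top-scoring action."""
    fused = {}
    for scores, w in zip(view_scores, view_weights):
        for action, s in scores.items():
            fused[action] = fused.get(action, 0.0) + w * s
    return max(fused, key=fused.get)

# Two cameras disagree; the better-placed view carries more weight.
view_scores = [{"fall": 0.9, "sit": 0.1},   # frontal camera
               {"fall": 0.3, "sit": 0.7}]   # partially occluded side camera
action = fuse_views(view_scores, [0.7, 0.3])
```

Fusing learned features (as the paper proposes) rather than final scores lets the classifier exploit correlations between views, at the cost of a joint training stage.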
NASA Technical Reports Server (NTRS)
Takallu, M. A.; Glaab, L. J.; Hughes, M. F.; Wong, D. T.; Bartolone, A. P.
2008-01-01
In support of the NASA Aviation Safety Program's Synthetic Vision Systems Project, a series of piloted simulations were conducted to explore and quantify the relationship between candidate Terrain Portrayal Concepts and Guidance Symbology Concepts, specific to General Aviation. The experiment scenario was based on a low-altitude en route flight in Instrument Meteorological Conditions in the central mountains of Alaska. A total of 18 general aviation pilots, with three levels of pilot experience, evaluated a test matrix of four terrain portrayal concepts and six guidance symbology concepts. Quantitative measures included various pilot/aircraft performance data, flight technical errors, and flight control inputs. The qualitative measures included pilot comments and pilot responses to structured questionnaires covering perceived workload, subjective situation awareness, pilot preferences, and rare event recognition. There were statistically significant effects from guidance symbology concepts and terrain portrayal concepts, but no significant interactions between them. Lower flight technical errors and increased situation awareness were achieved using Synthetic Vision Systems displays, as compared to the baseline Pitch/Roll Flight Director and Blue Sky Brown Ground combination. Overall, the guidance symbology concepts that combined a path-based guidance cue with a tunnel display performed better than the other guidance concepts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenneth Thomas
2012-02-01
Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems, based on 1970s-vintage technology, to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. And so, the improvement in I&C system performance has not translated to bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner that enables significant business innovation as a means of offsetting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL).
The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of nuclear power plants (NPPs) from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, a potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research being conducted in the following major areas of plant function: (1) highly integrated control rooms; (2) highly automated plant; (3) integrated operations; (4) human performance improvement for field workers; and (5) outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as stepping stones to the eventual seamless digital environment described in the Future Vision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenneth Thomas; Bruce Hallbert
2013-02-01
Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems, based on 1970s-vintage technology, to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. And so, the improvement in I&C system performance has not translated to bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner that enables significant business innovation as a means of offsetting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL).
The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of nuclear power plants (NPPs) from one that is highly reliant on a large staff performing mostly manual activities to one based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, a potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. Research supporting the Future Vision is being conducted in the following major areas of plant function: (1) highly integrated control rooms; (2) highly automated plant; (3) integrated operations; (4) human performance improvement for field workers; and (5) outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as stepping stones to the eventual seamless digital environment described in the Future Vision.
Causes and prevalence of visual impairment among adults in the United States.
Congdon, Nathan; O'Colmain, Benita; Klaver, Caroline C W; Klein, Ronald; Muñoz, Beatriz; Friedman, David S; Kempen, John; Taylor, Hugh R; Mitchell, Paul
2004-04-01
To estimate the cause-specific prevalence and distribution of blindness and low vision in the United States by age, race/ethnicity, and gender, and to estimate the change in these prevalence figures over the next 20 years. Summary prevalence estimates of blindness (both according to the US definition of ≤6/60 [≤20/200] best-corrected visual acuity in the better-seeing eye and the World Health Organization standard of <6/120 [<20/400]) and low vision (<6/12 [<20/40] best-corrected vision in the better-seeing eye) were prepared separately for black, Hispanic, and white persons in 5-year age intervals starting at 40 years. The estimated prevalences were based on recent population-based studies in the United States, Australia, and Europe. These estimates were applied to 2000 US Census data, and to projected US population figures for 2020, to estimate the number of Americans with visual impairment. Cause-specific prevalences of blindness and low vision were also estimated for the different racial/ethnic groups. Based on demographics from the 2000 US Census, an estimated 937,000 (0.78%) Americans older than 40 years were blind (US definition). An additional 2.4 million Americans (1.98%) had low vision. The leading cause of blindness among white persons was age-related macular degeneration (54.4% of the cases), while among black persons, cataract and glaucoma accounted for more than 60% of blindness. Cataract was the leading cause of low vision, responsible for approximately 50% of bilateral vision worse than 6/12 (20/40) among white, black, and Hispanic persons. The number of blind persons in the US is projected to increase by 70% to 1.6 million by 2020, with a similar rise projected for low vision. Blindness or low vision affects approximately 1 in 28 Americans older than 40 years. The specific causes of visual impairment, and especially blindness, vary greatly by race/ethnicity.
The prevalence of visual disabilities will increase markedly during the next 20 years, owing largely to the aging of the US population.
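The projection method described above (group-specific prevalence rates applied to census counts, then to projected 2020 counts) can be sketched as follows. The rates and population figures below are illustrative placeholders, not the study's actual data.

```python
# Sketch: applying group-specific prevalence rates to census counts to
# estimate the number of affected persons. All rates and counts below are
# hypothetical, chosen only to show the arithmetic of the projection.

def project_cases(prevalence_by_group, population_by_group):
    """Sum prevalence * population over demographic groups."""
    return sum(prevalence_by_group[g] * population_by_group[g]
               for g in prevalence_by_group)

# Hypothetical groups: (race/ethnicity, age band)
prevalence = {("white", "80+"): 0.070, ("black", "80+"): 0.045}
population_2000 = {("white", "80+"): 6_000_000, ("black", "80+"): 500_000}
population_2020 = {("white", "80+"): 9_500_000, ("black", "80+"): 900_000}

cases_2000 = project_cases(prevalence, population_2000)
cases_2020 = project_cases(prevalence, population_2020)
growth = cases_2020 / cases_2000 - 1  # relative increase driven by aging
```

Holding the rates fixed while the population ages is exactly what produces the projected 70% rise in blindness described in the abstract.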
Hi-Vision telecine system using pickup tube
NASA Astrophysics Data System (ADS)
Iijima, Goro
1992-08-01
Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.
State highways as main streets : a study of community design and visioning.
DOT National Transportation Integrated Search
2009-10-01
The objectives for this project were to explore community transportation design policy to improve collaboration when state highways serve as local main streets, determine successful approaches to meet the federal requirements for visioning set forth ...
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens robot vision to more industrial applications that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others.
The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
Bagayoko, C-O; Bediang, G; Anne, A; Niang, M; Traoré, A-K; Geissbuhler, A
2017-11-01
It is generally agreed today that digital technology provides a lever for improving access to health care, care processes, and public health planning and activities such as education and prevention. Its use in countries that have reached a given level of development has taken place in a somewhat fragmented manner that raises important interoperability problems and sometimes makes synergy impossible between the different digital health projects. This may be linked to several factors, principally the lack of a global vision of digital health and inadequate methodological knowledge that prevents the development and implementation of this vision. African countries can learn from these errors at the outset of their digital health development by moving toward systemic approaches, recognized standards, and tools appropriate to realities on the ground. The aim of this work is to present the methodological approaches, as well as the principal results, of two relatively new centers of expertise in Mali and Cameroon intended to cultivate this vision of digital governance in the domain of health and to train professionals to implement the projects. Both centers were created through civil-society initiatives. The center in Mali evolved into an economic interest group and then into collaboration with healthcare and university organizations. The same process is underway at the Cameroon center. The principal results from these centers span research, development, training, and implementation of digital health tools. They have produced dozens of scientific publications, doctoral dissertations, theses, and papers focused especially on subjects such as medico-economic evaluation tools for e-health and health information technology systems. In light of these results, we can conclude that these two centers of expertise have well and truly been established.
Their role may be decisive in the local training of participants, the culture of good governance of digital health projects, the development of operational strategies, and the implementation of projects.
Roberts, L.N.; Biewick, L.R.
1999-01-01
This report documents a comparison of two methods of resource calculation that are being used in the National Coal Resource Assessment project of the U.S. Geological Survey (USGS). Tewalt (1998) discusses the history of using computer software packages such as GARNET (Graphic Analysis of Resources using Numerical Evaluation Techniques), GRASS (Geographic Resource Analysis Support System), and the vector-based geographic information system (GIS) ARC/INFO (ESRI, 1998) to calculate coal resources within the USGS. The study discussed here, compares resource calculations using ARC/INFO* (ESRI, 1998) and EarthVision (EV)* (Dynamic Graphics, Inc. 1997) for the coal-bearing John Henry Member of the Straight Cliffs Formation of Late Cretaceous age in the Kaiparowits Plateau of southern Utah. Coal resource estimates in the Kaiparowits Plateau using ARC/INFO are reported in Hettinger, and others, 1996.
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
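The report centers on camera calibration and pose refinement; the core geometric step in any such pipeline is pinhole projection of a 3-D point into pixel coordinates. A minimal sketch, with hypothetical intrinsics (focal lengths, principal point) and an example camera-frame point, not values from the JPL system:

```python
# Pinhole-camera projection sketch: a camera-frame 3-D point (x, y, z)
# maps to pixel (u, v) via perspective division and the intrinsics
# (fx, fy) focal lengths and (cx, cy) principal point. All numbers are
# illustrative assumptions.

def project(fx, fy, cx, cy, point):
    """Project a camera-frame 3-D point to pixel coordinates (u, v)."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

# A point half a meter right, a quarter meter below, two meters ahead:
u, v = project(800.0, 800.0, 320.0, 240.0, (0.5, -0.25, 2.0))
```

Calibration estimates (fx, fy, cx, cy) plus lens distortion; pose refinement estimates the rotation and translation that move world points into this camera frame before projection.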
Development of dog-like retrieving capability in a ground robot
NASA Astrophysics Data System (ADS)
MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary
2013-01-01
This paper presents the Mobile Intelligence Team's approach to the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog's, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, to search images for the object as the robot drove a search pattern, to identify people near the robot for safe operation, to correctly identify the object among distractors, and to localize the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.
Multiparameter vision testing apparatus
NASA Technical Reports Server (NTRS)
Hunt, S. R., Jr.; Homkes, R. J.; Poteate, W. B.; Sturgis, A. C. (Inventor)
1975-01-01
Compact vision testing apparatus is described for testing a large number of physiological characteristics of the eyes and visual system of a human subject. The head of the subject is inserted into a viewing port at one end of a light-tight housing containing various optical assemblies. Visual acuity and other refractive characteristics and ocular muscle balance characteristics of the eyes of the subject are tested by means of a retractable phoroptor assembly carried near the viewing port and a film cassette unit carried in the rearward portion of the housing (the latter selectively providing a variety of different visual targets which are viewed through the optical system of the phoroptor assembly). The visual dark adaptation characteristics and absolute brightness threshold of the subject are tested by means of a projector assembly which selectively projects one or both of a variable intensity fixation target and a variable intensity adaptation test field onto a viewing screen located near the top of the housing.
Basic design principles of colorimetric vision systems
NASA Astrophysics Data System (ADS)
Mumzhiu, Alex M.
1998-10-01
Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
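A concrete example of the colorimetric step the paper argues vision systems must respect is the conversion from linear camera RGB to CIE XYZ tristimulus values. The sketch below uses the standard sRGB/D65 matrix; a real camera would need its own calibrated matrix, which is precisely the point the paper makes.

```python
# Linear RGB -> CIE XYZ via a 3x3 matrix. The coefficients are the
# standard sRGB/D65 primaries; an actual vision system's sensor would
# require a characterized matrix of its own.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(rgb):
    """Convert a linear-RGB triple (0..1) to CIE XYZ tristimulus values."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_XYZ)

# Linear white (1, 1, 1) should map to the D65 white point:
X, Y, Z = rgb_to_xyz((1.0, 1.0, 1.0))
```

Note that the middle row sums to 1.0, so Y (luminance) of white is normalized; measuring color with raw, uncharacterized RGB skips exactly this step.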
NASA Runway Incursion Prevention System (RIPS) Dallas-Fort Worth Demonstration Performance Analysis
NASA Technical Reports Server (NTRS)
Cassell, Rick; Evers, Carl; Esche, Jeff; Sleep, Benjamin; Jones, Denise R. (Technical Monitor)
2002-01-01
NASA's Aviation Safety Program Synthetic Vision System project conducted a Runway Incursion Prevention System (RIPS) flight test at the Dallas-Fort Worth International Airport in October 2000. The RIPS research system includes advanced displays, airport surveillance system, data links, positioning system, and alerting algorithms to provide pilots with enhanced situational awareness, supplemental guidance cues, a real-time display of traffic information, and warnings of runway incursions. This report describes the aircraft and ground based runway incursion alerting systems and traffic positioning systems (Automatic Dependent Surveillance - Broadcast (ADS-B) and Traffic Information Service - Broadcast (TIS-B)). A performance analysis of these systems is also presented.
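The abstract does not give the alerting algorithms, but the flavor of a runway-incursion check can be sketched as a predicted-arrival-time comparison between own-ship and a surface target. The function, thresholds, and numbers below are invented for illustration; they are not the actual RIPS logic.

```python
# Illustrative incursion check (NOT the RIPS algorithm): alert when an
# arriving aircraft and a surface target are both predicted to reach the
# runway threshold within a shared time window.

def time_to_point(distance_m, speed_mps):
    """Seconds until reaching a reference point at constant speed."""
    return float("inf") if speed_mps <= 0 else distance_m / speed_mps

def incursion_alert(own_dist, own_speed, traffic_dist, traffic_speed,
                    window_s=15.0, horizon_s=60.0):
    """True if both arrivals fall within window_s of each other, soon."""
    t1 = time_to_point(own_dist, own_speed)
    t2 = time_to_point(traffic_dist, traffic_speed)
    return abs(t1 - t2) < window_s and max(t1, t2) < horizon_s

# Landing aircraft 1800 m out at 70 m/s vs. a taxiing target 200 m away:
alert = incursion_alert(1800.0, 70.0, 200.0, 8.0)
```

In the real system the surveillance inputs for such a check would come from the ADS-B and TIS-B traffic feeds the report analyzes.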
Quality Control by Artificial Vision
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.
2010-01-01
Computational technology has fundamentally changed many aspects of our lives. One clear evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements to foster quality control by artificial vision, as well as work that fine-tunes the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters.
They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results of applying Fourier phase matching for projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers rewarding. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first one, Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. On the other hand, the second paper, by Oswald-Tranta et al., focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at a high throughput. Another paper describing an inspection system is Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes. This is achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely that any reader is working on all four of these specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.
Computational approaches to vision
NASA Technical Reports Server (NTRS)
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
1997-12-01
Science Foundation, the development of a topography of STI systems for the Library of Congress, and the development of a system to provide input to...Information System’s Database and a project to develop a reference catalog of Internet resources in area studies. She is consultant to foreign and...interface development for non-US data. Prior to this, she served as the Director of Corporate Librarian Relations, developing marketing, support, and new
Energy Storage: Batteries and Fuel Cells for Exploration
NASA Technical Reports Server (NTRS)
Manzo, Michelle A.; Miller, Thomas B.; Hoberecht, Mark A.; Baumann, Eric D.
2007-01-01
NASA's Vision for Exploration requires safe, human-rated, energy storage technologies with high energy density, high specific energy and the ability to perform in a variety of unique environments. The Exploration Technology Development Program is currently supporting the development of battery and fuel cell systems that address these critical technology areas. Specific technology efforts that advance these systems and optimize their operation in various space environments are addressed in this overview of the Energy Storage Technology Development Project. These technologies will support a new generation of more affordable, more reliable, and more effective space systems.
Improving Robotic Assembly of Planar High Energy Density Targets
NASA Astrophysics Data System (ADS)
Dudt, D.; Carlson, L.; Alexander, N.; Boehm, K.
2016-10-01
Increased quantities of planar assemblies for high energy density targets are needed as higher shot rates are implemented at facilities such as the National Ignition Facility and the Matter in Extreme Conditions station of the Linac Coherent Light Source. To meet this growing demand, robotics are used to reduce assembly time. This project studies how machine vision and force feedback systems can be used to improve the quantity and quality of planar target assemblies. Vision-guided robotics can identify and locate parts, reducing laborious manual loading of parts into precision pallets and the associated teaching of locations. On-board automated inspection can measure part pickup offsets to correct part drop-off placement into target assemblies. Force feedback systems can detect pickup locations and apply consistent force to produce more uniform glue bond thickness, thus improving the performance of the targets. System designs and performance evaluations will be presented. Work supported in part by the US DOE under the Science Undergraduate Laboratory Internships Program (SULI) and ICF Target Fabrication DE-NA0001808.
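The pickup-offset correction described above reduces to a small coordinate adjustment: vision measures how far off-center the part sits in the gripper, and the drop-off target is shifted to compensate. A sketch with hypothetical positions in millimeters:

```python
# Sketch of vision-based pickup-offset correction. The part was measured
# slightly off the expected grip point, so the nominal drop-off location
# is shifted by the same offset in the opposite direction. All
# coordinates are illustrative, not from the actual assembly station.

def corrected_dropoff(nominal_xy, measured_part_xy, expected_part_xy):
    """Shift the nominal drop-off by the observed pickup offset (mm)."""
    dx = measured_part_xy[0] - expected_part_xy[0]
    dy = measured_part_xy[1] - expected_part_xy[1]
    return (nominal_xy[0] - dx, nominal_xy[1] - dy)

# Part picked up 0.12 mm right and 0.05 mm above the gripper center:
target = corrected_dropoff((25.0, 40.0), (0.12, 0.05), (0.0, 0.0))
```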
ROVER: A prototype active vision system
NASA Astrophysics Data System (ADS)
Coombs, David J.; Marsh, Brian D.
1987-08-01
The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge-coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well-defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
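The executive organization described above, a priority queue of jobs posted by functional units, with the executive running whichever job it judges most effective, can be sketched as follows. Job names and priority values are invented for illustration.

```python
import heapq

# Sketch of a priority-queue executive in the style described above:
# functional units post (priority, job, args) entries, and the executive
# always runs the highest-priority job next. Names are hypothetical.

class Executive:
    def __init__(self):
        self._queue = []   # min-heap of (-priority, seq, job, args)
        self._seq = 0      # tie-breaker preserves insertion order

    def post(self, priority, job, *args):
        heapq.heappush(self._queue, (-priority, self._seq, job, args))
        self._seq += 1

    def run(self):
        """Drain the queue, highest priority first; return job results."""
        trace = []
        while self._queue:
            _, _, job, args = heapq.heappop(self._queue)
            trace.append(job(*args))
        return trace

ex = Executive()
ex.post(1, lambda: "log frame")
ex.post(5, lambda: "track ball")    # most effective job runs first
ex.post(3, lambda: "predict path")
order = ex.run()
```

Extensibility falls out naturally: a new functional module only needs to post jobs through the same interface.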
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-D electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
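Partitioning by a spatial index, as described above, is commonly done with a Z-order (Morton) key: interleaving the bits of voxel coordinates gives nearby voxels nearby keys, so spatial blocks land on the same node. The key function and node count below are illustrative assumptions, not necessarily the system's actual scheme.

```python
# Sketch of spatial-index partitioning via a 3-D Morton (Z-order) key.
# Interleaving coordinate bits preserves spatial locality, which is what
# makes key-range partitioning across cluster nodes effective.

def morton3(x, y, z, bits=10):
    """Interleave the low `bits` bits of (x, y, z) into one Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def node_for(x, y, z, num_nodes=8):
    """Map a voxel block to a storage node (hypothetical node count)."""
    return morton3(x, y, z) % num_nodes

key = morton3(3, 1, 0)  # x = 0b11, y = 0b01, z = 0b00
```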
NASA Technical Reports Server (NTRS)
Sepulveda, Jose A.
1992-01-01
The stated purposes of the Management Science Faculty Fellowship Project were to: (1) provide a comprehensive analysis of KSC management training for engineers and other management professionals from project/program lead through executive levels; and (2) develop evaluation methodologies which can be used to perform ongoing program-wide course-to-course assessments. This report focuses primarily on the first stated purpose of the project. Ideally, the analysis of KSC management training will build on the current system and efficiently propose improvements to achieve existing goals and objectives while helping to identify new visions and new outcomes for the Center's Management Training Mission. Section 2 describes the objectives, approach, and specific tasks used to analyze KSC's management training system. Section 3 discusses the main conclusions derived from an analysis of the available training data. Section 4 discusses the characteristics and benefits envisioned for a Management Training System. Section 5 proposes a training system as identified by the results of a Needs Assessment exercise conducted at KSC this summer. Section 6 presents a number of recommendations for future work.
Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System
2015-03-26
camera model. Light reflected or projected from objects in the scene of the outside world is taken in by the aperture (or opening) shaped as a double...model’s analog aspects with an analog-to-digital interface converting raw images of the outside world scene into digital information a computer can use to...Figure 2.7. Digital Image Coordinate System. Used with permission [30]. Angular Field of View. The angular field of view is the angle of the world scene
1988-04-01
Official Interviewed on Navy’s Role , Prospects 19 GREECE Details on Production of Artemis-30 Antiaircraft System 20 High Cost Pointed Out 20...face. I would stress that it is impossible for all the mixed intermuni- cipal projects to become pure ones. [Question] Is the regional issue a...providing an enthusiastic vision of Wallonia, dynamic and rejuvenated. I would stress that the demographic problem is enormously important. [Question
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
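The hyperacuity attributed to overlapping fly-eye sensors can be illustrated numerically: when each sensor has a broad, overlapping response, a response-weighted centroid recovers target positions finer than the sensor spacing. The Gaussian responses, geometry, and widths below are illustrative assumptions, not the Wyoming sensor's actual characteristics.

```python
import math

# Hyperacuity sketch: four sensors one unit apart, each with a broad
# Gaussian angular response. A response-weighted centroid localizes a
# target between sensor centers, i.e., below the array's nominal
# resolution. All parameters are hypothetical.

def sensor_response(center, target, sigma=1.0):
    """Gaussian falloff of a sensor's output with target offset."""
    return math.exp(-((target - center) ** 2) / (2 * sigma ** 2))

def estimate_position(sensor_centers, target, sigma=1.0):
    """Response-weighted centroid over the sensor array."""
    w = [sensor_response(c, target, sigma) for c in sensor_centers]
    return sum(wi * c for wi, c in zip(w, sensor_centers)) / sum(w)

centers = [0.0, 1.0, 2.0, 3.0]   # sensors spaced 1 unit apart
estimate = estimate_position(centers, target=1.3)
```

The estimate lands near 1.3 even though no sensor sits there; edge effects of the finite array bias it slightly, which a real design would compensate for.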
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance. Most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.
A vision and strategy for the virtual physiological human: 2012 update
Hunter, Peter; Chapman, Tara; Coveney, Peter V.; de Bono, Bernard; Diaz, Vanessa; Fenner, John; Frangi, Alejandro F.; Harris, Peter; Hose, Rod; Kohl, Peter; Lawford, Pat; McCormack, Keith; Mendes, Miriam; Omholt, Stig; Quarteroni, Alfio; Shublaq, Nour; Skår, John; Stroetmann, Karl; Tegner, Jesper; Thomas, S. Randall; Tollis, Ioannis; Tsamardinos, Ioannis; van Beek, Johannes H. G. M.; Viceconti, Marco
2013-01-01
European funding under Framework 7 (FP7) for the virtual physiological human (VPH) project has been in place now for 5 years. The VPH Network of Excellence (NoE) has been set up to help develop common standards, open source software, freely accessible data and model repositories, and various training and dissemination activities for the project. It is also working to coordinate the many clinically targeted projects that have been funded under the FP7 calls. An initial vision for the VPH was defined by the FP6 STEP project in 2006. In 2010, we wrote an assessment of the accomplishments of the first two years of the VPH in which we considered the biomedical science, healthcare and information and communications technology challenges facing the project (Hunter et al. 2010 Phil. Trans. R. Soc. A 368, 2595–2614 (doi:10.1098/rsta.2010.0048)). We proposed that a not-for-profit professional umbrella organization, the VPH Institute, should be established as a means of sustaining the VPH vision beyond the time-frame of the NoE. Here, we update and extend this assessment and in particular address the following issues raised in response to Hunter et al.: (i) a vision for the VPH updated in the light of progress made so far, (ii) biomedical science and healthcare challenges that the VPH initiative can address while also providing innovation opportunities for the European industry, and (iii) external changes needed in regulatory policy and business models to realize the full potential that the VPH has to offer to industry, clinics and society generally. PMID:24427536
Multiple-camera tracking: UK government requirements
NASA Astrophysics Data System (ADS)
Hosmer, Paul
2007-10-01
The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) is looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for tracking people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB was asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR, the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building it into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper briefly describes the i-LIDS project and then details the work conducted in building the new tracking aspect of the standard.
Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, E.; Denholm, P.; Margolis, R.
2013-01-01
The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.
A mixed reality approach for stereo-tomographic quantification of lung nodules.
Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge
2016-05-25
To reduce the radiation dose and equipment cost associated with lung CT screening, we propose a mixed-reality nodule measurement method built around an active-shutter stereo imaging system. Instead of acquiring hundreds of projection views and reconstructing an image, we generate two projections of an iteratively placed ellipsoidal volume in the field of view and merge these synthetic projections with two original CT projections. We then demonstrate the feasibility of measuring the position and size of a nodule by having a human observer judge, through active-shutter 3D vision glasses, whether the projections of the ellipsoidal volume and the nodule overlap. In a simulated experiment with 8 viewers, the average errors of the measured nodule parameters were less than 1 mm. Hence, the method could measure real nodules accurately in experiments with physically measured projections.
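The geometric core of this approach, recovering a nodule's 3-D position from just two projection views, can be sketched with an idealized parallel-beam model. The view angles and the closed-form triangulation below are illustrative assumptions, not the authors' implementation:

```python
import math

def project(point, phi):
    """Parallel-beam projection of a 3-D point onto a detector at angle phi.

    The detector column is x*cos(phi) + y*sin(phi); rows follow z directly.
    """
    x, y, z = point
    return (x * math.cos(phi) + y * math.sin(phi), z)

def triangulate(u0, u_theta, v, theta):
    """Recover (x, y, z) from detector columns at angles 0 and theta."""
    x = u0                                          # view at 0 gives x directly
    y = (u_theta - x * math.cos(theta)) / math.sin(theta)
    return (x, y, v)                                # z comes straight from the row

# Two views, 30 degrees apart, of a synthetic nodule centre:
theta = math.radians(30)
nodule = (12.0, -7.5, 4.2)
u0, v0 = project(nodule, 0.0)
u_t, _ = project(nodule, theta)
recovered = triangulate(u0, u_t, v0, theta)
```

In the paper's workflow the observer effectively performs this triangulation perceptually, adjusting the ellipsoid until its two synthetic projections overlap the nodule's in both views.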
Characterization of flotation color by machine vision
NASA Astrophysics Data System (ADS)
Siren, Ari
1999-09-01
Flotation is the most common industrial method by which valuable minerals are separated from waste rock after the ore is crushed and ground. For process control, flotation plants and devices are equipped with conventional and specialized sensors; however, certain variables, such as the color of the froth and the size of its bubbles, are left to the operator's visual observation. The ChaCo project (EU Project 24931) was launched in November 1997. In this project, a measuring station was built at the Pyhasalmi flotation plant. The system includes an RGB camera and a spectral color-measuring instrument for inspecting flotation color. The visible spectral range is also measured so that operators' comments on froth color can be compared with sphalerite concentration and process balance. Different ratios of dried mineral (sphalerite) to iron pyrite were studied to determine the minerals' characteristic spectral features. The correlation between sphalerite spectral reflectance and sphalerite concentration over various wavelengths is used to select a suitable camera system with filters, and to compare the results with the color information from the RGB camera. Various candidate machine vision techniques are discussed for this application, and the preprocessed information on dried mineral colors is adapted to the online measuring station. Moving froth bubbles produce total reflections that disturb the color information; polarization filters are used and the results reported. Reflectance outside the visible range is also studied and reported.
The Role of the Community Nurse in Promoting Health and Human Dignity-Narrative Review Article
Muntean, Ana; Tomita, Mihaela; Ungureanu, Roxana
2013-01-01
Abstract Background: Population health, as defined in the WHO constitution, is "a state of complete physical, mental and social well-being". Human dignity lies at the basis of human welfare, and this dimension requires an integrated vision of health care. Bronfenbrenner's ecosystemic vision highlights the unexpected connections between the values-based social macrosystem and the microsystem consisting of the individual and family. The community nurse's role is to bring into practice education and care, respect for human dignity, and the bonds between the community's values and practices and the physical health of individuals. In Romania, the promotion of community nursing began in 2002, through the project promoting social inclusion by developing human and institutional resources within community nursing, run by the National School of Public Health, Management and Education in Healthcare, Bucharest; community nursing was established in the 10 counties included in the project. Taking respect for human dignity as an axiomatic value for community nurse interventions, we stress the need to develop a primary care network in Romania. Our argument is based on an analysis of the concept of human dignity within health care, together with a secondary analysis of 2010 health indicators for the 10 counties included in the project. Our conclusions draw attention to the need for community nurses and open directions for new research and development needed to promote primary health care in Romania. PMID:26060614
Holodeck: Telepresence Dome Visualization System Simulations
NASA Technical Reports Server (NTRS)
Hite, Nicolas
2012-01-01
This paper explores the simulation and evaluation of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will project a full 360-degree image onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to report their surroundings precisely, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure, namely a system of projectors that communicate with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including the generation of fisheye images, distortion correction, and the creation of a reliable content-generation pipeline.
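Dome-ready fisheye content like that mentioned above is commonly generated with the equidistant fisheye model, in which the angle off the optical axis maps linearly to image radius. The sketch below shows that standard mapping and its inverse; the focal scaling f and the model choice itself are assumptions for illustration, not necessarily the Holodeck's actual warp:

```python
import math

def fisheye_project(direction, f=1.0):
    """Equidistant fisheye: angle off the optical axis maps linearly to radius."""
    x, y, z = direction                         # optical axis along +z
    theta = math.atan2(math.hypot(x, y), z)     # angle off axis
    phi = math.atan2(y, x)                      # azimuth around the axis
    r = f * theta
    return (r * math.cos(phi), r * math.sin(phi))

def fisheye_unproject(u, v, f=1.0):
    """Inverse mapping from fisheye image coordinates to a unit view direction."""
    theta = math.hypot(u, v) / f
    phi = math.atan2(v, u)
    s = math.sin(theta)
    return (s * math.cos(phi), s * math.sin(phi), math.cos(theta))

# Round-trip a sample view direction through the mapping:
d = (0.2, 0.1, 0.97)
uv = fisheye_project(d)
back = fisheye_unproject(uv[0], uv[1])
```

A real dome pipeline composes this camera-side model with the measured projector/dome geometry to build the warp mesh applied by the GPU.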
Wearable Improved Vision System for Color Vision Deficiency Correction
Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria
2017-01-01
Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in subjects with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal-vision subjects do. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827
How (and why) the visual control of action differs from visual perception
Goodale, Melvyn A.
2014-01-01
Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions. PMID:24789899
Extrafoveal Video Extension for an Immersive Viewing Experience.
Turban, Laura; Urban, Fabrice; Guillotel, Philippe
2016-02-11
Between the recent popularity of virtual reality (VR) and the development of 3D, immersion has become an integral part of entertainment concepts. Head-mounted display (HMD) devices are often used to give users a feeling of immersion in the environment. Another technique is to project additional material surrounding the viewer, as is achieved with CAVE systems. As a continuation of this technique, it could be interesting to extend surrounding projection to current television or cinema screens. The idea is to entirely fill the viewer's field of vision, thus providing a more complete feeling of being in the scene and part of the story. The appropriate content can be captured using large field-of-view (FoV) technology, using a rig of cameras for 110° to 360° capture, or created using computer-generated images. The FoV is, however, rather limited for existing (legacy) content, covering between 36° and 90° depending on the distance from the screen. This paper seeks to overcome this FoV limitation by proposing computer vision techniques that extend such legacy content into peripheral (extrafoveal) vision without changing the original creative intent or damaging the viewer's experience. A new methodology is also proposed for performing user tests to evaluate the quality of the experience and confirm that the sense of immersion has been increased. This paper thus presents: (i) an algorithm to spatially extend the video based on human vision characteristics, (ii) its subjective results compared to state-of-the-art techniques, (iii) the protocol required to evaluate the quality of experience (QoE), and (iv) the results of the user tests.
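The quoted 36° to 90° range follows directly from the geometry of a flat screen: the horizontal FoV is twice the arctangent of half the screen width over the viewing distance. A minimal sketch (the screen width and viewing distances are chosen for illustration):

```python
import math

def fov_degrees(screen_width_m, viewing_distance_m):
    """Horizontal field of view subtended by a flat screen at a given distance."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * viewing_distance_m)))

# A 1 m wide screen seen from 1.5 m subtends about 37 degrees;
# the same screen seen from 0.5 m subtends exactly 90 degrees.
far = fov_degrees(1.0, 1.5)
near = fov_degrees(1.0, 0.5)
```

Since human vision spans roughly 180° horizontally, even the near-viewing case leaves a large peripheral region unfilled, which is the gap the extrafoveal extension targets.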
Welinder, Lotte G; Baggesen, Kirsten L
2012-12-01
To investigate the visual abilities of students with severe developmental delay (DD), aged 6-8 years, starting in special needs education. Between 1 January 2000 and 31 December 2008, we screened the vision of all students with severe DD starting in special needs schools in Northern Jutland, Denmark. All students with visual acuities ≤6/12 were refractioned and examined by an ophthalmologist. Of 502 students, 56 (11%) had visual impairment (VI) [visual acuity (VA) ≤ 6/18], of whom 21 had previously been undiagnosed. Legal blindness was found in 15 students (3%), of whom three had previously been undiagnosed. Students tested with preferential-looking systems (N = 78) had significantly lower visual acuities [VA (decimal) = 0.55] than students tested with optotypes [VA (decimal) = 0.91], and had problems participating in the colour and form tests, possibly due to cerebral VI. The number of students with decreased vision identified by screening decreased significantly during the study period (r = 0.724, p = 0.028). Twenty-four students needed to be screened to find one with VI, and 181 to identify one case of legal blindness. Visual impairment is a common condition in students with severe DD. Despite increased awareness of VI in the school and health care systems, we continued to find a considerable number of students with hitherto undiagnosed decreased vision. © 2011 The Authors. Acta Ophthalmologica © 2011 Acta Ophthalmologica Scandinavica Foundation.
Vision Voice: A Multimedia Exploration of Diabetes and Vision Loss in East Harlem.
Ives, Brett; Nedelman, Michael; Redwood, Charysse; Ramos, Michelle A; Hughson-Andrade, Jessica; Hernandez, Evelyn; Jordan, Dioris; Horowitz, Carol R
2015-01-01
East Harlem, New York, is a community actively struggling with diabetes and its complications, including vision-related conditions that can affect many aspects of daily life. Vision Voice was a qualitative community-based participatory research (CBPR) study that intended to better understand the needs and experiences of people living with diabetes, other comorbid chronic illnesses, and vision loss in East Harlem. Using photovoice methodology, four participants took photographs, convened to review their photographs, and determined overarching themes for the group's collective body of work. Identified themes included effect of decreased vision function on personal independence/mobility and self-management of chronic conditions and the importance of informing community members and health care providers about these issues. The team next created a documentary film that further develops the narratives of the photovoice participants. The Vision Voice photovoice project was an effective tool to assess community needs, educate and raise awareness.
System Software Framework for System of Systems Avionics
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Peterson, Benjamin L; Thompson, Hiram C.
2005-01-01
Project Constellation implements NASA's vision for space exploration to expand human presence in our solar system. The engineering focus of this project is developing a system of systems architecture. This architecture allows for the incremental development of the overall program. Systems can be built and connected in a "Lego style" manner to generate configurations supporting various mission objectives. The development of the avionics or control systems of such a massive project will result in concurrent engineering. Also, each system will have software and the need to communicate with other (possibly heterogeneous) systems. Fortunately, this design problem has already been solved during the creation and evolution of systems such as the Internet and the Department of Defense's successful effort to standardize distributed simulation (now IEEE 1516). The solution relies on the use of a standard layered software framework and a communication protocol. A standard framework and communication protocol is suggested for the development and maintenance of Project Constellation systems. The ARINC 653 standard is a great start for such a common software framework. This paper proposes a common system software framework that uses the Real Time Publish/Subscribe protocol for framework-to-framework communication to extend ARINC 653. It is highly recommended that such a framework be established before development. This is important for the success of concurrent engineering. The framework provides an infrastructure for general system services and is designed for flexibility to support a spiral development effort.
1974-08-31
Acknowledges the vision, guidance and outstanding direction of Ouida C. Upchurch, Capt., NC, USN, Project Manager. [Scanned table-of-contents fragment; recoverable unit titles include: Introduction to the Navy Medical Department; Orientation to the Navy Medical Department; Introduction to Observation, Communication, and Instructional Techniques; Surgical Wounds: Irrigations; Surgical Wounds: Suture Removal.]
2016-11-01
The instructor was Prof. Fei-Fei Li, a well-known leader in the computer vision community. All of the course materials were made ... Systems Center Pacific (SSC Pacific). The machine learning community began organizing itself in 2012, which inspired a group of people to study an online ... labor for the participants to study the material alongside their project work. This report documents the activities of the course along with some
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
Design Environment for Novel Vertical Lift Vehicles: DELIVER
NASA Technical Reports Server (NTRS)
Theodore, Colin
2016-01-01
This is a 20-minute presentation discussing the DELIVER vision. DELIVER is part of the ARMD Transformative Aeronautics Concepts Program, particularly the Convergent Aeronautics Solutions Project. The presentation covers the DELIVER vision, transforming markets, the conceptual design process, challenges addressed, technical content, and FY2016 key activities.
ERIC Educational Resources Information Center
Shoultz, Jan; Kooker, Barbara Molina; Sloat, Ann R.
1998-01-01
In Hawaii, one of four national "vision for nursing education" projects focused on identifying themes for a community-based curriculum. Focus groups selected nursing history, culture, identity, knowledge, and practice as well as cross-disciplinary themes. (SK)
Pedagogical Possibilities for Unruly Bodies
ERIC Educational Resources Information Center
Rice, Carla; Chandler, Eliza; Liddiard, Kirsty; Rinaldi, Jen; Harrison, Elisabeth
2018-01-01
Project Re-Vision uses disability arts to disrupt stereotypical understandings of disability and difference that create barriers to healthcare. In this paper, we examine how digital stories produced through Re-Vision disrupt biopedagogies by working as body-becoming pedagogies to create non-didactic possibilities for living in/with difference. We…
Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B
2014-01-01
When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.
NASA Project Constellation Systems Engineering Approach
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel L.
2005-01-01
NASA's Office of Exploration Systems (OExS) is organized to empower the Vision for Space Exploration with transportation systems that result in achievable, affordable, and sustainable human and robotic journeys to the Moon, Mars, and beyond. In the process of delivering these capabilities, the systems engineering function is key to implementing policies, managing mission requirements, and ensuring technical integration and verification of hardware and support systems in a timely, cost-effective manner. The OExS Development Programs Division includes three main areas: (1) human and robotic technology, (2) Project Prometheus for nuclear propulsion development, and (3) Constellation Systems for space transportation systems development, including a Crew Exploration Vehicle (CEV). Constellation Systems include Earth-to-orbit, in-space, and surface transportation systems; maintenance and science instrumentation; and robotic investigators and assistants. In parallel with development of the CEV, robotic explorers will serve as trailblazers to reduce the risk and costs of future human operations on the Moon, as well as missions to other destinations, including Mars. Additional information is included in the original extended abstract.
Space Station automation and robotics
NASA Technical Reports Server (NTRS)
1987-01-01
A group of fifteen students in the Electrical Engineering Department at the University of Maryland, College Park, has been involved in a design project under the sponsorship of NASA Headquarters, NASA Goddard Space Flight Center, and the Systems Research Center (SRC) at UMCP. The goal of the NASA/USRA project was first to refine the design work done in Spring 1986 on the proposed Mobile Remote Manipulator System (MRMS) for the Space Station. This was followed by design exercises involving the OMV and a two-armed service vehicle. Three students worked on projects suggested by NASA Goddard scientists for ten weeks this past summer. The knowledge gained from the summer design exercise has been used to improve our current design of the MRMS. To this end, the following program was undertaken for the Fall 1986 semester: (1) refinement of the MRMS design; and (2) addition of vision capability to our design.
Drosou, A.; Ioannidis, D.; Moustakas, K.; Tzovaras, D.
2011-01-01
Unobtrusive Authentication Using ACTIvity-Related and Soft BIOmetrics (ACTIBIO) is an EU Specific Targeted Research Project (STREP) where new types of biometrics are combined with state-of-the-art unobtrusive technologies in order to enhance security in a wide spectrum of applications. The project aims to develop a modular, robust, multimodal biometrics security authentication and monitoring system, which uses a biodynamic physiological profile, unique for each individual, and advancements of the state of the art in unobtrusive behavioral and other biometrics, such as face, gait recognition, and seat-based anthropometrics. Several shortcomings of existing biometric recognition systems are addressed within this project, which have helped in improving existing sensors, in developing new algorithms, and in designing applications, towards creating new, unobtrusive, biometric authentication procedures in security-sensitive, Ambient Intelligence environments. This paper presents the concept of the ACTIBIO project and describes its unobtrusive authentication demonstrator in a real scenario by focusing on the vision-based biometric recognition modalities. PMID:21380485
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
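The retinal receptive fields discussed above are often abstracted mathematically as a difference of Gaussians (center-surround). As an illustrative sketch only (the kernel size and sigmas are arbitrary assumptions, not values from the paper), such a receptive field can be built as:

```python
import numpy as np

def dog_kernel(size=21, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians kernel: a classic model of a
    center-surround retinal receptive field."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

k = dog_kernel()
# A balanced kernel integrates to roughly zero, so uniform
# illumination produces almost no response.
print(abs(k.sum()) < 0.01)  # → True
```

Convolving an image with such a kernel gives an edge-enhancing response analogous to retinal ganglion cell output.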
Hand-Eye Calibration of Robonaut
NASA Technical Reports Server (NTRS)
Nickels, Kevin; Huber, Eric
2004-01-01
NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high-risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle.
The scheme has been implemented on Robonaut Unit A and has been shown to reduce mismatch between kinematically derived positions and visually derived positions from a mean of 13.75 cm using the previous calibration to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
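The paper does not reproduce the estimator itself, but one standard ingredient of this kind of hand-eye calibration is a least-squares rigid alignment between kinematically derived and visually derived 3-D point sets. A hypothetical sketch using the Kabsch/Procrustes SVD solution (the function name and synthetic data are illustrative, not Robonaut's actual code):

```python
import numpy as np

def fit_rigid_transform(A, B):
    """Least-squares rigid transform (R, t) with B ~ R @ A + t,
    via the Kabsch/Procrustes SVD solution. A, B: (N, 3) arrays."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.05])
B = A @ R_true.T + t_true
R, t = fit_rigid_transform(A, B)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # → True
```

In a calibration setting, A would hold fixture positions predicted from the arm kinematics and B the corresponding stereo-vision measurements; the residual after alignment is the kind of mismatch the abstract quantifies in centimetres.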
Erwin, Katherine; Blumenthal, Daniel S; Chapel, Thomas; Allwood, L Vernon
2004-11-01
We evaluated collaboration among academic and community partners in a program to recruit African American youth into the health professions. Six institutions of higher education, an urban school system, two community organizations, and two private enterprises became partners to create a health career pipeline for this population. The pipeline consisted of 14 subprograms designed to enrich academic science curricula, stimulate the interest of students in health careers, and facilitate entry into professional schools and other graduate-level educational programs. Subprogram directors completed questionnaires regarding a sense of common mission/vision and coordination/collaboration three times during the 3-year project. The partners strongly shared a common mission and vision throughout the duration of the program, although there was some weakening in the last phase. Subprogram directors initially viewed coordination/collaboration as weak, but by midway through the project period viewed it as stronger. Feared loss of autonomy was foremost among several factors that threatened collaboration among the partners. Collaboration was improved largely through a process of building trust among the partners.
NASA Technical Reports Server (NTRS)
Gibbel, Mark; Bellamy, Marvin; DeSantis, Charlie; Hess, John; Pattok, Tracy; Quintero, Andrew; Silver, R.
1996-01-01
ESS 2000 has the vision of enhancing the knowledge necessary to implement cost-effective, leading-edge ESS technologies and procedures in order to increase U.S. electronics industry competitiveness. This paper defines ESS and discusses the factors driving the project, the objectives of the project, its participants, the three phases of the project, the technologies involved, and project deliverables.
Smart mobile robot system for rubbish collection
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; Sien Siang, Tan
2018-03-01
This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective of this paper is to design a mobile robot that can detect and recognize medium-size rubbish such as drinking cans. Besides that, the objective is also to design a mobile robot with the ability to estimate the position of rubbish relative to the robot. In addition, the mobile robot is also able to approach the rubbish based on that estimated position. This paper explains the types of image processing, detection and recognition methods, and image filters considered. This project implements the RGB subtraction method as the primary detection system. In addition, an algorithm for distance measurement based on the image plane is implemented in this project. This project is limited to using a computer webcam as the sensor. Secondly, the robot is only able to approach the nearest rubbish within the camera's field of view, and only rubbish whose body contains distinguishable RGB colour components.
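As a rough illustration of the RGB subtraction idea (the threshold value and target channel are assumptions for the sketch, not parameters from the project), a colour mask can be computed by subtracting the competing channels from the target channel:

```python
import numpy as np

def rgb_subtraction_mask(image, channel=0, threshold=60):
    """Colour-blob mask by channel subtraction: keep pixels where the
    target channel dominates the other two by more than `threshold`.
    `image` is an (H, W, 3) uint8 RGB array."""
    img = image.astype(np.int16)              # avoid uint8 wrap-around
    others = [c for c in range(3) if c != channel]
    diff = img[:, :, channel] - np.maximum(img[:, :, others[0]],
                                           img[:, :, others[1]])
    return diff > threshold

# A toy frame: one bright-red 2x2 patch on a grey background.
frame = np.full((4, 4, 3), 120, dtype=np.uint8)
frame[1:3, 1:3] = [220, 40, 30]               # the red "can"
mask = rgb_subtraction_mask(frame)
print(mask.sum())  # → 4
```

The centroid of the resulting mask gives the image-plane position of the object, which can then feed the distance-estimation step described in the abstract.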
Contextualising and Analysing Planetary Rover Image Products through the Web-Based PRoGIS
NASA Astrophysics Data System (ADS)
Morley, Jeremy; Sprinks, James; Muller, Jan-Peter; Tao, Yu; Paar, Gerhard; Huber, Ben; Bauer, Arnold; Willner, Konrad; Traxler, Christoph; Garov, Andrey; Karachevtseva, Irina
2014-05-01
The international planetary science community has launched, landed and operated dozens of human and robotic missions to the planets and the Moon. They have collected various surface imagery that has only been partially utilized for further scientific purposes. The FP7 project PRoViDE (Planetary Robotics Vision Data Exploitation) is assembling a major portion of the imaging data gathered so far from planetary surface missions into a unique database, bringing them into a spatial context and providing access to a complete set of 3D vision products. Processing is complemented by a multi-resolution visualization engine that combines various levels of detail for a seamless and immersive real-time access to dynamically rendered 3D scenes. PRoViDE aims to (1) complete relevant 3D vision processing of planetary surface missions, such as Surveyor, Viking, Pathfinder, MER, MSL, Phoenix, Huygens, and Lunar ground-level imagery from Apollo, Russian Lunokhod and selected Luna missions, (2) provide highest resolution & accuracy remote sensing (orbital) vision data processing results for these sites to embed the robotic imagery and its products into spatial planetary context, (3) collect 3D vision processing and remote sensing products within a single coherent spatial database, (4) realise seamless fusion between orbital and ground vision data, (5) demonstrate the potential of planetary surface vision data by maximising image quality visualisation in a 3D publishing platform, (6) collect and formulate use cases for novel scientific application scenarios exploiting the newly introduced spatial relationships and presentation, (7) demonstrate the concepts for MSL, and (8) realize on-line dissemination of key data & its presentation by a web-based GIS and rendering tool named PRoGIS (Planetary Robotics GIS).
PRoGIS is designed to give access to rover image archives in geographical context, using projected image view cones, obtained from existing meta-data and updated according to processing results, as a means to interact with and explore the archive. However, PRoGIS is more than a source data explorer. It is linked to the PRoVIP (Planetary Robotics Vision Image Processing) system, which includes photogrammetric processing tools to extract terrain models, compose panoramas, and explore and exploit multi-view stereo (where features on the surface have been imaged from different rover stops). We have started with the Opportunity MER rover as our test mission, but the system is being designed to be multi-mission, taking advantage in particular of UCL MSSL's PDS mirror, and we intend to at least deal with both MER rovers and MSL. For the period of PRoViDE, until the end of 2015, the further intent is to handle lunar and other Martian rover & descent camera data. The presentation discusses the challenges of integrating rover and orbital derived data into a single geographical framework, especially reconstructing view cones; our human-computer interaction intentions in creating an interface to the rover data that is accessible to planetary scientists; how we handle multi-mission data in the database; and a demonstration of the resulting system & its processing capabilities. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
Comparative Geometrical Investigations of Hand-Held Scanning Systems
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Przybilla, H.-J.; Lindstaedt, M.; Tschirschwitz, F.; Misgaiski-Hass, M.
2016-06-01
An increasing number of hand-held scanning systems by different manufacturers are becoming available on the market. However, their geometrical performance is little-known to many users. Therefore the Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has carried out geometrical accuracy tests with the following systems in co-operation with the Bochum University of Applied Sciences (Laboratory for Photogrammetry) as well as the Humboldt University in Berlin (Institute for Computer Science): DOTProduct DPI-7, Artec Spider, Mantis Vision F5 SR, Kinect v1 + v2, Structure Sensor and Google's Project Tango. In the framework of these comparative investigations geometrically stable reference bodies were used. The appropriate reference data were acquired by measurement with two structured light projection systems (AICON smartSCAN and GOM ATOS I 2M). The comprehensive test results of the different test scenarios are presented and critically discussed in this contribution.
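The abstract does not state which accuracy metric was used, but a common flatness-style check in scanner comparisons of this kind is the RMS deviation of scanned points from a least-squares reference plane. A minimal, hypothetical sketch (the synthetic data and noise level are assumptions, not measurements from the study):

```python
import numpy as np

def plane_fit_rms(points):
    """RMS deviation of 3-D points from their least-squares plane:
    a simple flatness metric for evaluating scanner accuracy."""
    centered = points - points.mean(axis=0)
    # The plane normal is the direction of least variance,
    # i.e. the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]
    dist = centered @ normal                  # signed point-to-plane distances
    return float(np.sqrt(np.mean(dist**2)))

# Noisy samples of the z = 0 plane with ~1 mm Gaussian scan noise.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 100, 500),
                       rng.uniform(0, 100, 500),
                       rng.normal(0, 1.0, 500)])
print(plane_fit_rms(pts))  # close to the 1.0 mm noise level
```

Comparing such per-surface residuals across devices, against reference data from a higher-accuracy structured-light system, is one way results like those in this study can be summarized.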
DOT National Transportation Integrated Search
2008-12-01
The I-95 Corridor Coalition's Vision project is a departure from the Coalition's historic role that focused primarily on shorter-term operational improvements in the corridor. In the past, most of the day-to-day issues confronting the Coalition m...
HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.
Lin, Huei-Yung; Wang, Min-Liang
2014-09-04
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.
NASA's Global Imagery Browse Services - Technologies for Visualizing Earth Science Data
NASA Astrophysics Data System (ADS)
Cechini, M. F.; Boller, R. A.; Baynes, K.; Schmaltz, J. E.; Thompson, C. K.; Roberts, J. T.; Rodriguez, J.; Wong, M. M.; King, B. A.; King, J.; De Luca, A. P.; Pressley, N. N.
2017-12-01
For more than 20 years, the NASA Earth Observing System (EOS) has collected earth science data for thousands of scientific parameters, now totaling nearly 15 Petabytes of data. In 2013, NASA's Global Imagery Browse Services (GIBS) formed its vision to "transform how end users interact and discover [EOS] data through visualizations." This vision included leveraging scientific and community best practices and standards to provide a scalable, compliant, and authoritative source for EOS earth science data visualizations. Since that time, GIBS has grown quickly and now services millions of daily requests for over 500 imagery layers representing hundreds of earth science parameters to a broad community of users. For many of these parameters, visualizations are available within hours of acquisition from the satellite. For others, visualizations are available for the entire mission of the satellite. The GIBS system is built upon the OnEarth and MRF open source software projects, which are provided by the GIBS team. This software facilitates standards-based access for compliance with existing GIS tools. The GIBS imagery layers are predominantly rasterized images represented in two-dimensional coordinate systems, though multiple projections are supported. The OnEarth software also supports the GIBS ingest pipeline to facilitate low-latency updates to new or updated visualizations. This presentation will focus on the following topics: an overview of GIBS visualizations and the user community; the current benefits and limitations of the OnEarth and MRF software projects and related standards; GIBS access methods and their compatibilities and incompatibilities with existing GIS libraries and applications; considerations for visualization accuracy and understandability; future plans for more advanced visualization concepts, including vertical profiles and vector-based representations; and future plans for Amazon Web Services support and deployments.
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate the approach of achieving collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using standard triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm based on image sensors. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance of the UAV was better with obstacles having dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance.
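For a rectified stereo pair, the triangulation used for the tracked laser spot reduces to Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity of the spot between the two images. A minimal sketch with illustrative numbers (not the study's actual calibration values):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a tracked point (e.g. a projected laser spot) from a
    rectified stereo pair: Z = f * B / disparity."""
    disparity = x_left - x_right              # pixels
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# 700 px focal length, 10 cm baseline, 50 px disparity -> 1.4 m range.
print(stereo_depth(400, 350, focal_px=700, baseline_m=0.10))  # → 1.4
```

Because depth is inversely proportional to disparity, range resolution degrades quadratically with distance, which is consistent with restricting the approach to short-range avoidance.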
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
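The local frequency coding mechanisms central to this project are commonly modeled with Gabor filters: a sinusoidal carrier under a Gaussian envelope. As an illustrative sketch (the parameter values are assumptions, not those used in the testbed):

```python
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """2-D Gabor filter: a standard model of local frequency coding
    in early visual cortex."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)   # rotate coordinates
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A small bank over four orientations: a local-frequency analyzer front end.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)  # → 4 (31, 31)
```

Filter-bank responses of this kind underlie shape-from-texture and related algorithms like those listed among the project's results.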
Interdisciplinary multisensory fusion: design lessons from professional architects
NASA Astrophysics Data System (ADS)
Geiger, Ray W.; Snell, J. T.
1992-11-01
Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. Human efficacy for innovation in architectural design has always reflected the projected perceptual vision of the designer vis-à-vis the hierarchical spirit of the design process. In pursuing better ways to build and/or design things, we have found surprising success in exploring certain more esoteric applications. One of those applications is the vision of an artistic approach in and around creative problem solving. Our research into vision and visual systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and interdisciplinary design practice. Discussion will cover areas of cognitive ergonomics, natural modeling sources, and an open architectural process of means and goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process. One hypothesis is that the kinematic simulation of perceived connections between hard and soft sciences, centering on the life sciences and life in general, has become a very effective foundation for design theory and application.
Models of Speed Discrimination
NASA Technical Reports Server (NTRS)
1997-01-01
The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other focus has been the study of high-level vision exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing these analyzers that decompose images into useful components. Various models are then compared to the physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements including motion, shading and other physical phenomena. With few exceptions, there seems to be very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the vision system. The understanding of these processes will have implications for our expectations of the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noakes, Mark W; Garcia, Pablo; Rosen, Jacob
The Trauma Pod (TP) vision is to develop a rapidly deployable robotic system to perform critical acute stabilization and/or surgical procedures autonomously or in a teleoperative mode on wounded soldiers in the battlefield who might otherwise die before treatment in a combat hospital can be provided. In the first phase of a project pursuing this vision, a robotic TP system was developed and its capability demonstrated by performing select surgical procedures on a patient phantom. The system demonstrates the feasibility of performing acute stabilization procedures with the patient being the only human in the surgical cell. The teleoperated surgical robot is supported by autonomous arms that carry out scrub-nurse and circulating-nurse functions. Tool change and supply delivery are performed automatically and at least as fast as those performed manually by nurses. The TP system also includes a tomographic X-ray facility for patient diagnosis and 2-D fluoroscopic data to support interventions. The vast amount of clinical protocol data generated in the TP system is recorded automatically. These capabilities form the basis for a more comprehensive acute diagnostic and management platform that will provide life-saving care in environments where surgical personnel are not present.
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
Low-Cost Space Hardware and Software
NASA Technical Reports Server (NTRS)
Shea, Bradley Franklin
2013-01-01
The goal of this project is to demonstrate and support the overall vision of NASA's Rocket University (RocketU) through the design of an electrical power system (EPS) monitor for implementation on RUBICS (Rocket University Broad Initiatives CubeSat), through the support for the CHREC (Center for High-Performance Reconfigurable Computing) Space Processor, and through FPGA (Field Programmable Gate Array) design. RocketU will continue to provide low-cost innovations even with continuous cuts to the budget.
Mapping Parameterized Dataflow Graphs onto FPGA Platforms (Preprint)
2014-02-01
Shen, Nimish Sane, William Plishker, Shuvra S. Bhattacharyya (University of Maryland); Hojin Kee (National Instruments)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Bari
SoundVision held a post-workshop teleconference for our 2011 graduates (as we have done for all participants) to consolidate what they'd learned during the workshop. To maximize the Science Literacy Project's impact after it ends, we strengthened and reinforced our alumni's vibrant networking infrastructure so they can continue to connect and support each other, and updated our archive system to ensure all of our science and science journalism resources and presentations will be easy to access and use over time.
Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance
NASA Technical Reports Server (NTRS)
Jones, Brandon M.
2005-01-01
Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to enable an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
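Coordinate system matching between consecutive frames of the kind described can be expressed as mapping image points through an affine or projective transform in homogeneous coordinates. A minimal, hypothetical sketch (the matrix values are illustrative, not parameters from the project):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 projective transform using
    homogeneous coordinates, as in frame-to-frame registration."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])   # lift to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]               # back to Cartesian

# A pure affine case: scale by 2, then translate by (5, -3).
H = np.array([[2.0, 0.0, 5.0],
              [0.0, 2.0, -3.0],
              [0.0, 0.0, 1.0]])
out = apply_homography(H, np.array([[1.0, 1.0], [0.0, 0.0]]))
# maps (1, 1) -> (7, -1) and (0, 0) -> (5, -3)
print(out)
```

An affine transform keeps the bottom row at (0, 0, 1); a full homography, with a non-trivial bottom row, additionally models the perspective change between descent frames.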
The NASA Constellation University Institutes Project: Thrust Chamber Assembly Virtual Institute
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Rybak, Jeffry A.; Hulka, James R.; Jones, Gregg W.; Nesman, Tomas; West, Jeffrey S.
2006-01-01
This paper documents key aspects of the Constellation University Institutes Project (CUIP) Thrust Chamber Assembly (TCA) Virtual Institute (VI). Specifically, the paper details the TCA VI organizational and functional aspects relative to providing support for Constellation Systems. The TCA VI vision is put forth and discussed in detail. The vision provides the objective and approach for improving thrust chamber assembly design methodologies by replacing the current empirical tools with verified and validated CFD codes. The vision also sets out ignition, performance, thermal environments and combustion stability as focus areas where application of these improved tools is required. Flow physics and a study of the Space Shuttle Main Engine development program are used to conclude that the injector is the key to robust TCA design. Requirements are set out in terms of fidelity, robustness and demonstrated accuracy of the design tool. Lack of demonstrated accuracy is noted as the most significant obstacle to realizing the potential of CFD to be widely used as an injector design tool. A hierarchical decomposition process is outlined to facilitate the validation process. A simulation readiness level tool used to gauge progress toward the goal is described. Finally, there is a description of the current efforts in each focus area. The background of each focus area is discussed. The state of the art in each focus area is noted along with the TCA VI research focus in the area. Brief highlights of work in the area are also included.
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Development of Inflatable Entry Systems Technologies
NASA Technical Reports Server (NTRS)
Player, Charles J.; Cheatwood, F. McNeil; Corliss, James
2005-01-01
Achieving the objectives of NASA's Vision for Space Exploration will require the development of new technologies, which will in turn require higher fidelity modeling and analysis techniques, and innovative testing capabilities. Development of entry systems technologies can be especially difficult due to the lack of facilities and resources available to test these new technologies in mission relevant environments. This paper discusses the technology development process to bring inflatable aeroshell technology from Technology Readiness Level 2 (TRL-2) to TRL-7. This paper focuses mainly on two projects: Inflatable Reentry Vehicle Experiment (IRVE), and Inflatable Aeroshell and Thermal Protection System Development (IATD). The objectives of IRVE are to conduct an inflatable aeroshell flight test that demonstrates exoatmospheric deployment and inflation, reentry survivability and stability, and predictable drag performance. IATD will continue the development of the technology by conducting exploration specific trade studies and feeding forward those results into three more flight tests. Through an examination of these projects, and other potential projects, this paper discusses some of the risks, issues, and unexpected benefits associated with the development of inflatable entry systems technology.
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution that simultaneously estimates the calibration parameters of the vision system and the full 6-DOF motion of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structural motion, an Iterated Extended Kalman Filter recursively estimates the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, supporting structural health monitoring. Results of the performance evaluation, obtained both by numerical simulation and in real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, a minimum setup comprising only two cameras and four non-coplanar tracking points yielded highly accurate results for on-line camera calibration and full-motion estimation of the structure.
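The recursive predict/update cycle at the heart of the paper's estimator can be illustrated with a deliberately reduced sketch. The actual method is an Iterated Extended Kalman Filter over camera projection matrices and 6-DOF motion; the scalar filter below only shows the recursion under the same smooth-motion assumption, with all noise values and measurements invented for illustration.

```python
# Minimal 1-D Kalman filter sketch: recursively estimating a slowly varying
# structural displacement from noisy vision measurements. This is NOT the
# paper's Iterated Extended Kalman Filter; it only illustrates the
# predict/update recursion on a single state variable.

def kalman_step(x, p, z, q=1e-4, r=0.01):
    """One predict/update cycle.
    x, p: prior state estimate and its variance
    z:    new (noisy) displacement measurement
    q, r: assumed process and measurement noise variances
    """
    # Predict: smooth-motion assumption -> state carries over, variance grows.
    p = p + q
    # Update: blend prediction with measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Noisy measurements around a true displacement of 0.5 (units arbitrary).
measurements = [0.48, 0.53, 0.51, 0.49, 0.52, 0.50]
x, p = 0.0, 1.0  # vague prior
for z in measurements:
    x, p = kalman_step(x, p, z)
print(round(x, 2))  # estimate settles near the true displacement
```

As more frames arrive, the gain shrinks and the estimate stabilizes, which is what lets the full filter refine both calibration and motion over an image sequence.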
Flight Test Evaluation of Synthetic Vision Concepts at a Terrain Challenged Airport
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prince, Lawrence J., III; Bailey, Randell E.; Arthur, Jarvis J., III; Parrish, Russell V.
2004-01-01
NASA's Synthetic Vision Systems (SVS) Project is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the tunnel guidance display concept used within the SVS concepts achieved required navigation performance (RNP) criteria.
Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David
2018-06-01
The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.
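The pipeline described above maps extracted image features to a quality grade. As a dependency-free stand-in for the paper's support vector machine, the sketch below classifies with a simple nearest-centroid rule over two hypothetical features (mean redness, marbling fraction); all feature values, grade labels, and the classifier choice are assumptions for illustration, not the study's model.

```python
# Toy feature-to-grade classifier: nearest centroid over invented features.
# The actual study used an SVM over 107 image features; this only shows the
# shape of the train/predict pipeline.

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def train(samples):
    """samples: {grade: [(feat1, feat2), ...]} -> {grade: centroid}"""
    return {grade: centroid(rows) for grade, rows in samples.items()}

def predict(model, feats):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda g: dist2(model[g], feats))

# Hypothetical training data: (mean_red, marbling_fraction) per grade label.
training = {
    "grade_2": [(0.40, 0.02), (0.42, 0.03)],
    "grade_4": [(0.55, 0.08), (0.57, 0.09)],
}
model = train(training)
print(predict(model, (0.56, 0.085)))  # lands nearest the grade_4 centroid
```

An SVM replaces the centroid rule with a maximum-margin decision boundary, which is what gives the study's model its discriminative power on high-dimensional texture features.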
2020 Vision: The EICCD Moves into the 21st Century.
ERIC Educational Resources Information Center
Blong, John T.; Friedel, Janice N.
In 1989, the Eastern Iowa Community College District (EICCD) undertook a project to develop a collective image of what the community college should be in the coming century. The reasons for seeking this "shared vision" were to create institutional focus, foster commitment, build communication, and reaffirm the college's mission and…
The Influence of Attentional Focus Instructions and Vision on Jump Height Performance
ERIC Educational Resources Information Center
Abdollahipour, Reza; Psotta, Rudolf; Land, William M.
2016-01-01
Purpose: Studies have suggested that the use of visual information may underlie the benefit associated with an external focus of attention. Recent studies exploring this connection have primarily relied on motor tasks that involve manipulation of an object (object projection). The present study examined whether vision influences the effect of…
The Mission Project: Building a Nation of Learners by Advancing America's Community Colleges.
ERIC Educational Resources Information Center
American Association of Community Colleges, Washington, DC.
This document describes the American Association of Community Colleges (AACC), its new mission and vision statements, and a recommended set of strategic action areas deemed essential to creating the future described in the mission and vision statements. The proposed AACC mission statement reads: "building a nation of learners by advancing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, K.W.; Scott, K.P.
2000-11-01
Since the 2020 Vision project began in 1996, students from participating schools have completed and submitted a variety of scenarios describing potential world and regional conditions in the year 2020 and their possible effect on US national security. This report summarizes the students' views and describes trends observed over the course of the 2020 Vision project's five years. It also highlights the main organizational features of the project. An analysis of thematic trends among the scenarios showed interesting shifts in students' thinking, particularly in their views of computer technology, US relations with China, and globalization. In 1996, most students perceived computer technology as highly beneficial to society, but as the year 2000 approached, this technology was viewed with fear and suspicion, even personified as a malicious, uncontrollable being. Yet, after New Year's passed with little disruption, students generally again perceived computer technology as beneficial. Also in 1996, students tended to see US relations with China as potentially positive, with economic interaction proving favorable to both countries. By 2000, this view had transformed into a perception of China emerging as the US' main rival and ''enemy'' in the global geopolitical realm. Regarding globalization, students in the first two years of the project tended to perceive world events as dependent on US action. However, by the end of the project, they saw the US as having little control over world events and therefore, we Americans would need to cooperate and compromise with other nations in order to maintain our own well-being.
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
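The core mechanism the model builds on is spike coincidence: a disparity-tuned unit responds when events from the left and right sensors arrive close together in time at retinal positions offset by its preferred disparity. The sketch below shows only that coincidence principle, not the paper's full network (which adds recurrent cooperative and competitive connectivity); the event tuples and timing window are invented for illustration.

```python
# Coincidence detection for event-based stereo correspondence (bare sketch).
# Events are (timestamp_us, x_position) tuples from each sensor.

def coincidences(left, right, disparity, window_us=500):
    """Return left-event positions matched at the given disparity, i.e.
    pairs where x_left - x_right equals the preferred disparity and the
    two events fall within the temporal coincidence window."""
    matches = []
    for t_l, x_l in left:
        for t_r, x_r in right:
            if x_l - x_r == disparity and abs(t_l - t_r) <= window_us:
                matches.append(x_l)
    return matches

left_events  = [(1000, 10), (2000, 11), (3000, 12)]
right_events = [(1100, 8),  (2050, 9),  (9000, 10)]
# A detector tuned to disparity 2 fires for the two temporally close pairs;
# the third pair matches in space but not in time.
print(coincidences(left_events, right_events, disparity=2))  # [10, 11]
```

In the spiking implementation this pairwise check is not a loop but the membrane dynamics of a neuron that only reaches threshold when both inputs arrive within its integration window, which is what makes the approach map naturally onto low-latency neuromorphic hardware.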
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
Three-dimensional ocular kinematics underlying binocular single vision
Misslisch, H.
2016-01-01
We have analyzed the binocular coordination of the eyes during far-to-near refixation saccades based on the evaluation of distance ratios and angular directions of the projected target images relative to the eyes' rotation centers. By defining the geometric point of binocular single vision, called Helmholtz point, we found that disparities during fixations of targets at near distances were limited in the subject's three-dimensional visual field to the vertical and forward directions. These disparities collapsed to simple vertical disparities in the projective binocular image plane. Subjects were able to perfectly fuse the vertically disparate target images with respect to the projected Helmholtz point of single binocular vision, independent of the particular location relative to the horizontal plane of regard. Target image fusion was achieved by binocular torsion combined with corrective modulations of the differential half-vergence angles of the eyes in the horizontal plane. Our findings support the notion that oculomotor control combines vergence in the horizontal plane of regard with active torsion in the frontal plane to achieve fusion of the dichoptic binocular target images. PMID:27655969
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
Conventional binocular vision imaging systems have a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion with accurate results. This research introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, which are matched and reconstructed in 3-D. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of significance for measuring the 3-D morphology of objects in motion.
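Once the two cameras are calibrated and a point is matched between views, depth follows from triangulation. For the simplest case of a calibrated parallel-axis stereo pair, depth is Z = f·B/d, with focal length f (in pixels), baseline B, and disparity d = x_left − x_right. The paper's linear-array CCD system has its own calibration and matching pipeline; the sketch below shows only this core triangulation step, with invented parameter values.

```python
# Parallel-axis stereo triangulation (illustrative parameters, not the
# paper's calibration): depth from disparity for one matched point.

def triangulate(x_left, x_right, f_px=1000.0, baseline_m=0.2):
    d = x_left - x_right           # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = f_px * baseline_m / d      # depth along the optical axis (metres)
    x = z * x_left / f_px          # lateral position w.r.t. the left camera
    return x, z

x, z = triangulate(x_left=520.0, x_right=480.0)
print(round(z, 3))  # 5.0 metres for a 40-pixel disparity
```

Larger disparities mean closer points; as d shrinks toward zero the depth estimate diverges, which is why a wide baseline and accurate matching matter for distant structure.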
Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.
2016-01-01
Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized, and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in displayed stereoscopic OCT volumes. PMID:27231616
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III
2005-01-01
Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.
Dynamic systems and the role of evaluation: The case of the Green Communities project.
Anzoise, Valentina; Sardo, Stefania
2016-02-01
The crucial role evaluation can play in the co-development of project design and its implementation will be addressed through the analysis of a case study, the Green Communities (GC) project, funded by the Italian Ministry of Environment within the EU Interregional Operational Program (2007-2013) "Renewable Energy and Energy Efficiency". The project's broader goals included an attempt to trigger a change in Italian local development strategies, especially for mountain and inland areas, which would be tailored to the real needs of communities, and based on a sustainable exploitation and management of the territorial assets. The goal was not achieved, and this paper addresses the issues of how GC could have been more effective in fostering a vision of change, and which design adaptations and evaluation procedures would have allowed the project to better cope with the unexpected consequences and resistances it encountered. The conclusions drawn are that projects should be conceived, designed and carried out as dynamic systems, inclusive of a dynamic and engaged evaluation enabling the generation of feedback loops, iteratively interpreting the narratives and dynamics unfolding within the project, and actively monitoring the potential of various relationships among project participants for generating positive social change. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Miller, Thomas
2007-01-01
The NASA Glenn Research Center (GRC), along with the Goddard Space Flight Center (GSFC), Jet Propulsion Laboratory (JPL), Johnson Space Center (JSC), Marshall Space Flight Center (MSFC), and industry partners, is leading a space-rated lithium-ion advanced development battery effort to support the vision for Exploration. This effort addresses the lithium-ion battery portion of the Energy Storage Project under the Exploration Technology Development Program. Key discussions focus on the lithium-ion cell component development activities, a common lithium-ion battery module, test and demonstration of charge/discharge cycle life performance and safety characterization. A review of the space-rated lithium-ion battery project will be presented highlighting the technical accomplishments during the past year.
NASA Astrophysics Data System (ADS)
Cross, Jack; Schneider, John; Cariani, Pete
2013-05-01
Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision Systems (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, threedimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.
Evaluation of 5 different labeled polymer immunohistochemical detection systems.
Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A
2010-01-01
Immunohistochemical staining is important for diagnosis and therapeutic decision making but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA) were tested using 12 different, widely used mouse and rabbit primary antibodies, detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde fixed paraffin embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained false negatively. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time in the immunohistochemical detection systems currently in use.
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general-purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot, using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. Because the vision system is inside the loop closure of the robot tracking system, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the required performance. This paper describes the high-speed vision-based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.
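With the vision system inside the position loop, each camera frame yields a target position error that drives the next commanded move. The sketch below closes a 1-D proportional visual-servo loop; the gain, frame count, and the idealized noise-free "camera" are illustrative assumptions, not the RADL implementation.

```python
# Toy visual-servo loop: the "vision system" measures the position error
# once per frame, and a proportional controller commands the correction.
# Gains and the perfect measurement are assumptions for illustration.

def track(target, robot=0.0, gain=0.5, frames=20):
    """Step a 1-D proportional visual-servo loop; return final position."""
    for _ in range(frames):
        error = target - robot       # measured by the vision system each frame
        robot += gain * error        # commanded correction this cycle
    return robot

final = track(target=10.0)
print(round(final, 4))  # converges toward the target position
```

The residual error shrinks by a factor of (1 − gain) per frame, which is why frame rate and per-frame latency bound the achievable tracking bandwidth: every cycle of vision-processing delay is a cycle the robot moves on stale information.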
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
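The multiresolution data structure underlying such a converging hierarchy can be sketched as a plain image pyramid, in which each level halves the previous one by 2x2 block averaging. The paper's levels are nonlinear parallel processors, not simple averages; this sketch shows only the multiresolution skeleton, with an invented 4x4 input.

```python
# Simple image pyramid: each level halves the previous one by 2x2 averaging.
# Only the converging multilayered structure is illustrated here.

def downsample(img):
    """Halve a 2-D grid (even dimensions assumed) by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c+1] + img[r+1][c] + img[r+1][c+1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def pyramid(img):
    """List of levels, finest first, down to a single cell."""
    levels = [img]
    while len(levels[-1]) > 1 and len(levels[-1][0]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

base = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
levels = pyramid(base)
print([len(lv) for lv in levels])  # level heights: [4, 2, 1]
```

Coarse levels summarize the scene cheaply and can direct finer levels to the regions worth processing, which is the "selective acquisition of minimal relevant information" the active-vision argument above turns on.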
The Revolutionary Vertical Lift Technology (RVLT) Project
NASA Technical Reports Server (NTRS)
Yamauchi, Gloria K.
2018-01-01
The Revolutionary Vertical Lift Technology (RVLT) Project is one of six projects in the Advanced Air Vehicles Program (AAVP) of the NASA Aeronautics Research Mission Directorate. The overarching goal of the RVLT Project is to develop and validate tools, technologies, and concepts to overcome key barriers for vertical lift vehicles. The project vision is to enable the next generation of vertical lift vehicles with aggressive goals for efficiency, noise, and emissions, to expand current capabilities and develop new commercial markets. The RVLT Project invests in technologies that support conventional, non-conventional, and emerging vertical-lift aircraft in the very light to heavy vehicle classes. Research areas include acoustic, aeromechanics, drive systems, engines, icing, hybrid-electric systems, impact dynamics, experimental techniques, computational methods, and conceptual design. The project research is executed at NASA Ames, Glenn, and Langley Research Centers; the research extensively leverages partnerships with the US Army, the Federal Aviation Administration, industry, and academia. The primary facilities used by the project for testing of vertical-lift technologies include the 14- by 22-Ft Wind Tunnel, Icing Research Tunnel, National Full-Scale Aerodynamics Complex, 7- by 10-Ft Wind Tunnel, Rotor Test Cell, Landing and Impact Research facility, Compressor Test Facility, Drive System Test Facilities, Transonic Turbine Blade Cascade Facility, Vertical Motion Simulator, Mobile Acoustic Facility, Exterior Effects Synthesis and Simulation Lab, and the NASA Advanced Supercomputing Complex. To learn more about the RVLT Project, please stop by booth #1004 or visit their website at https://www.nasa.gov/aeroresearch/programs/aavp/rvlt.
Advocating mindset for cooperative partnership for better future of construction industry
NASA Astrophysics Data System (ADS)
Omar, Datuk Wahid
2017-11-01
Construction industry players are known for their low acceptance of change; hence, the biggest challenge in the industry is changing mindsets. This paper highlights the importance of transformation in shaping a better future for the industry. Transformation favors innovation and progressive development in the industry, and specifically in managing a project. Thus, changing the mindset of players, with an eye to the future and a focus on what is coming, is paramount to inculcating a culture of transformation in the construction ecosystem. The key to the success of transformation is collaborative and cooperative partnering, which ensures performance at every stage of project delivery. The collaborative, cooperative, and concerted effort of all parties involved in a project creates mutual understanding of its mission and vision. Adopting a healthy and harmonious project culture and implementing innovative procurement that emphasizes fair risk sharing should be a working culture. This cooperative partnership should be the future of project undertakings in the construction industry.
The MSFC Systems Engineering Guide: An Overview and Plan
NASA Technical Reports Server (NTRS)
Shelby, Jerry; Thomas, L. Dale
2007-01-01
This paper describes the guiding vision, progress to date, and the plan forward for development of the Marshall Space Flight Center (MSFC) Systems Engineering Guide (SEG), a virtual systems engineering handbook and archive that describes the systems engineering processes used by MSFC in the development of complex space systems, both ongoing ones such as the Ares launch vehicle and forthcoming ones as well. The intent of this website is to be a "One Stop Shop" for MSFC systems engineers: it will provide tutorial information, an overview of processes and procedures, links that assist systems engineers with guidance and references, and an archive of relevant systems engineering artifacts produced by the many NASA projects developed and managed by MSFC over the years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doak, J. E.; Prasad, Lakshman
2002-01-01
This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project, including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.
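Storing objects "as tree structures" in a relational database, as the abstract describes, is commonly done with a parent-pointer (adjacency-list) schema. The sketch below is illustrative only: it uses Python's built-in sqlite3 in place of the project's MySQL, and the table and column names are invented, not the paper's actual schema.

```python
import sqlite3

# Hypothetical schema: each row is one node of an object's shape tree,
# linked to its parent via parent_id (NULL marks the root).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE shape_node (
    id INTEGER PRIMARY KEY,
    object_id INTEGER,
    parent_id INTEGER,    -- NULL for the root of the tree
    feature TEXT          -- syntactic shape token for this node
)""")
rows = [(1, 7, None, "blob"),
        (2, 7, 1, "limb"),
        (3, 7, 1, "limb"),
        (4, 7, 2, "tip")]
conn.executemany("INSERT INTO shape_node VALUES (?, ?, ?, ?)", rows)

# Walk the whole tree for object 7 with a recursive CTE.
cur = conn.execute("""
    WITH RECURSIVE tree(id, feature, depth) AS (
        SELECT id, feature, 0 FROM shape_node
            WHERE object_id = 7 AND parent_id IS NULL
        UNION ALL
        SELECT s.id, s.feature, t.depth + 1
        FROM shape_node s JOIN tree t ON s.parent_id = t.id
    )
    SELECT feature, depth FROM tree ORDER BY depth, id
""")
print(cur.fetchall())  # [('blob', 0), ('limb', 1), ('limb', 1), ('tip', 2)]
```

The same adjacency-list pattern works unchanged in MySQL (8.0+ also supports recursive CTEs), which is presumably why the paper can keep the hierarchy entirely in ordinary tables.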
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-03-01
This is a four-part Wind Vision project, consisting of Wind Vision Highlights, Executive Summary, a Full Report, and Appendix. The U.S. Department of Energy (DOE) Wind Program, in close cooperation with the wind industry, led a comprehensive analysis to evaluate future pathways for the wind industry. The Wind Vision report updates and expands upon the DOE's 2008 report, 20% Wind Energy by 2030, and defines the societal, environmental, and economic benefits of wind power in a scenario with wind energy supplying 10% of national end-use electricity demand by 2020, 20% by 2030, and 35% by 2050.
A high resolution and high speed 3D imaging system and its application on ATR
NASA Astrophysics Data System (ADS)
Lu, Thomas T.; Chao, Tien-Hsin
2006-04-01
The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct the 3D model of an object. The stereo vision is achieved by employing a prism and mirror setup to split the views and combine them side by side in the camera. The advantage of this setup is its simple system architecture, easy synchronization, fast 3D imaging speed and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information for the potential targets and to separate the targets from the background and clutter noise. The added dimension of a 3D model provides additional features of surface profile, range information of the target. It is capable of removing the false shadow from camouflage and reveal the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to take large objects and to perform area 3D modeling onboard a UAV.
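Once the prism/mirror setup places a left and a right view of the scene side by side in one frame, depth recovery reduces to standard stereo triangulation. The sketch below is the generic relation Z = f·B/d, not the paper's actual calibration; the focal length, baseline, and disparity values are made up for illustration.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Generic stereo triangulation: depth Z = f * B / d, where
    f is focal length in pixels, B the effective baseline created
    by the prism/mirror split, and d the horizontal disparity of a
    feature between the two half-images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Illustrative numbers: f = 1200 px, B = 0.05 m, d = 30 px -> 2.0 m
print(depth_from_disparity(1200, 0.05, 30))  # 2.0
```

The inverse relationship between disparity and depth is also why such single-camera setups are most accurate at close range, where disparities are large.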
2010-12-01
[Extraction fragment; only partial text is recoverable.] The approach offers much more precise target engagement and stabilization, including thermal optics; a noted drawback is that mechanical malfunctions are more common. The camera system provides a complete panoramic view extending 0–180 degrees off-center (Figure 20: 360° view dome projection). The method can incorporate various types of synthetic vision aids, such as thermal or electro-optical sensors, to give the user the capability to see in…
General Mission Analysis Tool (GMAT): Mission, Vision, and Business Case
NASA Technical Reports Server (NTRS)
Hughes, Steven P.
2007-01-01
The goal of the GMAT project is to develop new space trajectory optimization and mission design technology by working inclusively with ordinary people, universities, businesses, and other government organizations, and to share that technology in an open and unhindered way. GMAT is a free and open-source software system: free for anyone to use in developing new mission concepts or improving current missions, and freely available in source-code form for enhancement or future technology development.
A Survey of Research Projects in Schools and Colleges of Optometry.
ERIC Educational Resources Information Center
Whitener, John C.
1981-01-01
A survey undertaken by the American Optometric Association reveals research projects, investigators, and in some cases, funding sources for research in the areas of low vision, ophthalmic lenses, pharmacology, anatomy and pathology, and sensory and motor functions. A total of 205 projects are charted. (MSE)
Prison Literacy Project Handbook.
ERIC Educational Resources Information Center
Kops, Joan, Ed.
This handbook records the creation, development and growth, and stumbling blocks and successes of the Prison Literacy Project (PLP). It is intended to serve as a model for other community groups that are developing their own literacy projects. The handbook provides a history and philosophy of PLP, states PLP's vision and purpose, discusses need,…
Umble, K; Bain, B; Ruddock-Small, M; Mahanna, E; Baker, E L
2012-07-01
Leadership development is a strategy for improving national responses to HIV/AIDS. The University of the West Indies offers the Caribbean Health Leadership Institute (CHLI) to enhance leaders' effectiveness and responses to HIV/AIDS through a cooperative agreement with the Centers for Disease Control and Prevention. CHLI enrolls leaders in annual cohorts numbering 20-40. To examine how CHLI influenced graduates' self-understanding, skills, approaches, vision, commitments, courage, confidence, networks, and contributions to program, organizational, policy, and systems improvements. Web-based surveys and interviews of graduates. CHLI increased graduates' self-understanding and skills and strengthened many graduates' vision, confidence, and commitments to improving systems. It helped graduates improve programs, policies, and systems by: motivating them and giving them ideas for changes to pursue, encouraging them to share their vision, deepening skills in areas such as systems thinking, policy advocacy, and communication, strengthening their inclusion of partners and team members, and influencing how they interacted with others. Training both HIV-focused and general health leaders can help both kinds of leaders foster improvements in HIV services and policies. Learners greatly valued self-assessments, highly interactive sessions, and the opportunity to build a network of professional colleagues. Projects provided opportunities to address substantive issues and immediately apply learning to work. Leadership development evaluations in the United States have also emphasized the complementary benefits of assessment and feedback, skills development, and network development. Global leadership programs should find ways to combine these components in both traditional face-to-face and distance-learning contexts.
Parton, Becky Sue
2006-01-01
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.
Paré, Guy; Sicotte, Claude; Poba-Nzaou, Placide; Balouzakis, George
2011-02-28
The adoption and diffusion of clinical information systems has become one of the critical benchmarks for achieving several healthcare organizational reform priorities, including home care, primary care, and integrated care networks. However, these systems are often strongly resisted by the same community that is expected to benefit from their use. Prior research has found that early perceptions and beliefs play a central role in shaping future attitudes and behaviors such as negative rumors, lack of involvement, and resistance to change. In this line of research, this paper builds on the change management and information systems literature and identifies variables associated with clinicians' early perceptions of organizational readiness for change in the specific context of clinical information system projects. Two cross-sectional surveys were conducted to test our research model. First, a questionnaire was pretested and then distributed to the future users of a mobile computing technology in 11 home care organizations. The second study took place in a large teaching hospital that had approved a budget for the acquisition of an electronic medical records system. Data analysis was performed using partial least squares. Scale items used in this study showed adequate psychometric properties. In Study 1, four of the hypothesized links in the research model were supported, with change appropriateness, organizational flexibility, vision clarity, and change efficacy explaining 75% of the variance in organizational readiness. In Study 2, four hypotheses were also supported, two of which differed from those supported in Study 1: the presence of an effective project champion and collective self-efficacy. In addition to these variables, vision clarity and change appropriateness also helped explain 75% of the variance in the dependent variable. Explanations for the similarities and differences observed in the two surveys are provided. 
Organizational readiness is arguably a key factor involved in clinicians' initial support for clinical information system initiatives. As healthcare organizations continue to invest in information technologies to improve quality and continuity of care and reduce costs, understanding the factors that influence organizational readiness for change represents an important avenue for future research.
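The studies above report predictors "explaining 75% of the variance" in organizational readiness via partial least squares. As an illustration only, the sketch below uses ordinary least squares on synthetic data (not the studies' PLS analysis or their data) to show what a variance-explained (R²) figure of this kind means for a readiness score predicted from change-belief variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Four synthetic predictors, loosely standing in for change
# appropriateness, organizational flexibility, vision clarity,
# and change efficacy (labels are illustrative, not measured scales).
X = rng.normal(size=(n, 4))
beta = np.array([0.5, 0.3, 0.4, 0.2])
y = X @ beta + rng.normal(scale=0.4, size=n)   # readiness score

Xd = np.column_stack([np.ones(n), X])          # add intercept
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ coef
r2 = 1 - resid.var() / y.var()                 # proportion of variance explained
print(f"R^2 = {r2:.2f}")
```

PLS differs from OLS by projecting predictors onto latent components before regression (useful when predictors are collinear, as belief scales often are), but the variance-explained interpretation of the result is the same.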
Re-Engineering Complex Legacy Systems at NASA
NASA Technical Reports Server (NTRS)
Ruszkowski, James; Meshkat, Leila
2010-01-01
The Flight Production Process (FPP) Re-engineering project has established a Model-Based Systems Engineering (MBSE) methodology and the technological infrastructure for the design and development of a reference, product-line architecture as well as an integrated workflow model for the Mission Operations System (MOS) for human space exploration missions at NASA Johnson Space Center. The design and architectural artifacts have been developed based on the expertise and knowledge of numerous Subject Matter Experts (SMEs). The technological infrastructure developed by the FPP Re-engineering project has enabled the structured collection and integration of this knowledge and further provides simulation and analysis capabilities for optimization purposes. A key strength of this strategy has been the judicious combination of COTS products with custom coding. The lean management approach that has led to the success of this project is based on having a strong vision for the whole lifecycle of the project and its progress over time, a goal-based design and development approach, a small team of highly specialized people in areas that are critical to the project, and an interactive approach for infusing new technologies into existing processes. This project, which has had a relatively small amount of funding, is on the cutting edge with respect to the utilization of model-based design and systems engineering. An overarching challenge that was overcome by this project was to convince upper management of the needs and merits of giving up more conventional design methodologies (such as paper-based documents and unwieldy and unstructured flow diagrams and schedules) in favor of advanced model-based systems engineering approaches.
Strategic Research Directions In Microgravity Materials Science
NASA Technical Reports Server (NTRS)
Clinton, Raymond G., Jr.; Wargo, Michael J.; Marzwell, Neville L.; Sanders, Gerald; Schlagheck, Ron; Semmes, Ed; Bassler, Julie; Cook, Beth
2004-01-01
The Office of Biological and Physical Research (OBPR) is moving aggressively to align programs, projects, and products with the vision for space exploration. Research in advanced materials is a critical element in meeting exploration goals. Research in low gravity materials science in OBPR is being focused on top priority needs in support of exploration: 1) Space Radiation Shielding; 2) In Situ Resource Utilization; 3) In Situ Fabrication and Repair; 4) Materials Science for Spacecraft and Propulsion Systems; 5) Materials Science for Advanced Life Support Systems. Roles and responsibilities in low gravity materials research for exploration between OBPR and the Office of Exploration Systems are evolving.
Vision-based obstacle recognition system for automated lawn mower robot development
NASA Astrophysics Data System (ADS)
Mohd Zin, Zalhan; Ibrahim, Ratnawati
2011-06-01
Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system in capturing and processing images is very important for any intelligent system, such as an autonomous robot. This paper attends to the development of a vision system that could contribute to an automated vision-based lawn mower robot. The work involves applying DIP techniques to detect and recognize three different types of obstacles that typically exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
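The edge-detection stage mentioned in this abstract can be sketched with a plain Sobel operator. This is a generic illustration of the technique, not the authors' pipeline or parameters: a synthetic bright "obstacle" on a dark field yields edge responses only at its boundary.

```python
import numpy as np

def sobel_edges(gray, thresh=0.5):
    """Convolve with the Sobel kernels and threshold the gradient
    magnitude at a fraction of its maximum (naive loop version,
    kept simple for clarity)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

# Synthetic "obstacle": a bright square on a dark field.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
edges = sobel_edges(img)
print(edges.sum())  # nonzero: edge pixels fire around the square only
```

In a real system this binary edge map would feed the segmentation and recognition stages; the abstract's filtering and enhancement steps would precede it to suppress noise on the grass texture.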
Trauma Pod: a semi-automated telerobotic surgical system.
Garcia, Pablo; Rosen, Jacob; Kapoor, Chetan; Noakes, Mark; Elbert, Greg; Treat, Michael; Ganous, Tim; Hanson, Matt; Manak, Joe; Hasser, Chris; Rohler, David; Satava, Richard
2009-06-01
The Trauma Pod (TP) vision is to develop a rapidly deployable robotic system to perform critical acute stabilization and/or surgical procedures, autonomously or in a teleoperative mode, on wounded soldiers in the battlefield who might otherwise die before treatment in a combat hospital could be provided. In the first phase of a project pursuing this vision, a robotic TP system was developed and its capability demonstrated by performing selected surgical procedures on a patient phantom. The system demonstrates the feasibility of performing acute stabilization procedures with the patient being the only human in the surgical cell. The teleoperated surgical robot is supported by autonomous robotic arms and subsystems that carry out scrub-nurse and circulating-nurse functions. Tool change and supply delivery are performed automatically and at least as fast as performed manually by nurses. Tracking and counting of the supplies is performed automatically. The TP system also includes a tomographic X-ray facility for patient diagnosis and two-dimensional (2D) fluoroscopic data to support interventions. The vast amount of clinical protocol data generated in the TP system is recorded automatically. Automation and teleoperation capabilities form the basis for a more comprehensive acute diagnostic and management platform that will provide life-saving care in environments where surgical personnel are not present.
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALVs) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: (1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and (2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system; using a neurally based computing substrate, it can complete all necessary visual tasks in real time.
Eco-logical successes : second edition, January 2012
DOT National Transportation Integrated Search
2012-01-01
In 2006, leaders from eight Federal agencies signed the interagency document EcoLogical: An Ecosystem Approach to Developing Infrastructure Projects. Eco-Logical is a document that outlines a shared vision of how to develop infrastructure projects in...
Stead, William W.; Miller, Randolph A.; Musen, Mark A.; Hersh, William R.
2000-01-01
The vision of integrating information—from a variety of sources, into the way people work, to improve decisions and process—is one of the cornerstones of biomedical informatics. Thoughts on how this vision might be realized have evolved as improvements in information and communication technologies, together with discoveries in biomedical informatics, have changed the art of the possible. This review identified three distinct generations of “integration” projects. First-generation projects create a database and use it for multiple purposes. Second-generation projects integrate by bringing information from various sources together through an enterprise information architecture. Third-generation projects inter-relate disparate but accessible information sources to provide the appearance of integration. The review suggests that the ideas developed in the earlier generations have not been supplanted by ideas from subsequent generations. Instead, the ideas represent a continuum of progress along the three dimensions of workflow, structure, and extraction. PMID:10730596
Monovision techniques for telerobots
NASA Technical Reports Server (NTRS)
Goode, P. W.; Carnils, K.
1987-01-01
The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.
Yoshida, Karen K; Parnes, Penny; Brooks, Dina; Cameron, Deb
2009-01-01
The purpose of this article is to describe the changing nature, process, and structure of an international non-governmental organisation (NGO) in response to internal and external factors. This article is based on interview data collected for the study, which focussed on the experiences and perceptions of key informants regarding trends related to official development assistance, local governments' perspective of the NGO as a development partner, and the NGO's perception of corporate and foundation support. Qualitative descriptive data analysis was used. Three main themes were developed from the interview data. Our analysis indicated shifts in: (1) vision/nature (from a single- to a cross-disability focus), (2) structure (building internal and external relationships), and (3) process (from ad hoc to systemic evaluations). These broader issues of vision, structure (relationships), and evaluation within and outside of the organisation need to be addressed to provide a foundation upon which funding initiatives can be developed. A closer relationship between funders and projects/programmes would do much to enhance the partnership and would ensure that projects are able to measure and report results in a manner conducive to increasing support.
Design of a dynamic test platform for autonomous robot vision systems
NASA Technical Reports Server (NTRS)
Rich, G. C.
1980-01-01
The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform and can then be subjected to a wide variety of simulated motions; its behavior can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, drive linkages, and motors and transmissions, are treated separately.
The Ontology of Vision. The Invisible, Consciousness of Living Matter
Fiorio, Giorgia
2016-01-01
If I close my eyes, the absence of light activates the peripheral cells devoted to the perception of darkness. The awareness of “seeing oneself seeing” is in its essence a thought, one that is internal to the vision and previous to any object of sight. To this amphibious faculty, the “diaphanous color of darkness,” Aristotle assigns the principle of knowledge. “Vision is a whole perceptual system, not a channel of sense.” Functions of vision are interwoven with the texture of human interaction within a terrestrial environment that is in turn contained in the cosmic order. A transitive host within the resonance of an inner-outer environment, the human being is the contact-term between two orders of scale, both bigger and smaller than the individual unity. In the perceptual integrative system of human vision, the convergence-divergence of the corporeal presence and the diffraction of its own appearance is the margin. The sensation of being no longer coincides with the breath of life; it does not seem “real” without the trace of some visible evidence and its simultaneous “sharing”. Without a shadow, without an imprint, the numeric copia of the physical presence inhabits the transient memory of our electronic prostheses. A rudimentary “visuality” replaces tangible experience, dissipating its meaning and the awareness of being alive. Transversal to the civilizations of the ancient world, through different orders of function and status, the anthropomorphic “figuration” of archaic sculpture addresses the margin between Being and Non-Being. Statuary human archetypes are not meant to be visible, but to exist as vehicles of transcendence to outlive the definition of human space-time. The awareness of individual finiteness seals the compulsion to “give body” to an invisible apparition shaping the figuration of an ontogenetic expression of human consciousness.
Subject and object, the term “humanum” fathoms the relationship between matter and its living dimension, “this de facto vision and the ‘there is’ which it contains.” The project reconsiders the dialectic between the terms vision–presence in the contemporary perception of archaic human statuary according to the transcendent meaning of its immaterial legacy. PMID:27014106
Utilization of the Space Vision System as an Augmented Reality System For Mission Operations
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles
2003-01-01
Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. 
It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.
ALHAT COBALT: CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M.
2015-01-01
The COBALT project is a flight demonstration of two NASA ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) capabilities that are key for future robotic or human landing GN&C (Guidance, Navigation and Control) systems. The COBALT payload integrates the Navigation Doppler Lidar (NDL) for ultraprecise velocity and range measurements with the Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. Terrestrial flight tests of the COBALT payload in an open-loop and closed-loop GN&C configuration will be conducted onboard a commercial, rocket-propulsive Vertical Test Bed (VTB) at a test range in Mojave, CA.
NASA Technical Reports Server (NTRS)
Cheatham, John B., Jr.; Magee, Kevin N.
1991-01-01
The Rice University Department of Mechanical Engineering and Materials Sciences' Robotics Group designed and built an eight degree of freedom redundant manipulator. Fuzzy logic was proposed as a control scheme for tasks not directly controlled by a human operator. In preliminary work, fuzzy logic control was implemented for a camera tracking system and a six degree of freedom manipulator. Both preliminary systems use real time vision data as input to fuzzy controllers. Related projects include integration of tactile sensing and fuzzy control of a redundant snake-like arm that is under construction.
Interactive target tracking for persistent wide-area surveillance
NASA Astrophysics Data System (ADS)
Ersoy, Ilker; Palaniappan, Kannappan; Seetharaman, Guna S.; Rao, Raghuveer M.
2012-06-01
Persistent aerial surveillance is an emerging technology that can provide continuous, wide-area coverage from an aircraft-based multiple-camera system. Tracking targets in these data sets is challenging for vision algorithms due to large data (several terabytes), very low frame rate, changing viewpoint, strong parallax and other imperfections due to registration and projection. Providing an interactive system for automated target tracking also has additional challenges that require online algorithms that are seamlessly integrated with interactive visualization tools to assist the user. We developed an algorithm that overcomes these challenges and demonstrated it on data obtained from a wide-area imaging platform.
Instruments and Methodologies for the Underwater Tridimensional Digitization and Data Musealization
NASA Astrophysics Data System (ADS)
Repola, L.; Memmolo, R.; Signoretti, D.
2015-04-01
In research started within the SINAPSIS project of the Università degli Studi Suor Orsola Benincasa, an underwater stereoscopic scanning system aimed at surveying submerged archaeological sites, integrable with standard systems for geomorphological detection of the coast, has been developed. The project involves the construction of hardware consisting of an aluminum frame supporting a pair of GoPro Hero Black Edition cameras, and of software for the production of point clouds and the initial processing of data. The software has features for stereoscopic vision system calibration, reduction of the noise and distortion of underwater captured images, searching for corresponding points of stereoscopic images using stereo-matching algorithms (dense and sparse), and point cloud generation and filtering. Only after various calibration and survey tests carried out during the excavations envisaged in the project was mastery of the methods for efficient data acquisition achieved. The current development of the system has allowed generation of portions of digital models of real submerged scenes. A semi-automatic procedure for global registration of the partial models is under development as a useful aid for the study and musealization of the sites.
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth
2016-01-01
Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To validate these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for measuring subject kinematics. Onboard the parabolic flight aircraft it is not practical to use traditional motion capture systems because of the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed using open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative kinematic data for comparison. Additionally, data such as the exercise volume required in small spaces such as the Orion capsule can be determined.
METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Utilizing OpenCV via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove the geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject; safety also necessitated that they be soft in case they became detached during parabolic flight, so small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration (sweeping a wand through the camera scenes simultaneously) was performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in a side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
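The multi-camera 3D reconstruction step described above can be sketched in plain NumPy as a direct linear transform (DLT) triangulation. This is an illustrative sketch of the underlying math, not the project's actual code; the camera matrices and marker position below are hypothetical.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D point from its pixel
    coordinates x1, x2 in two views with 3x4 projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated pair: identical intrinsics, 0.5 m baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

marker = np.array([0.2, 0.1, 2.0])      # ground-truth marker position (m)
x1 = P1 @ np.append(marker, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(marker, 1.0); x2 = x2[:2] / x2[2]
print(triangulate_dlt(P1, P2, x1, x2))  # recovers [0.2, 0.1, 2.0]
```

In practice the 2D inputs would be the undistorted marker centroids from each camera, and the projection matrices would come from the intrinsic and extrinsic calibrations described in the text.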
Loschek, Laura F; La Fortezza, Marco; Friedrich, Anja B; Blais, Catherine-Marie; Üçpunar, Habibe K; Yépez, Vicente A; Lehmann, Martin; Gompel, Nicolas; Gagneur, Julien; Sigrist, Stephan J
2018-01-01
Loss of the sense of smell is among the first signs of natural aging and neurodegenerative diseases such as Alzheimer’s and Parkinson’s. Cellular and molecular mechanisms promoting this smell loss are not understood. Here, we show that Drosophila melanogaster also loses olfaction before vision with age. Within the olfactory circuit, cholinergic projection neurons show a reduced odor response accompanied by a defect in axonal integrity and reduction in synaptic marker proteins. Using behavioral functional screening, we pinpoint that expression of the mitochondrial reactive oxygen scavenger SOD2 in cholinergic projection neurons is necessary and sufficient to prevent smell degeneration in aging flies. Together, our data suggest that oxidative stress induced axonal degeneration in a single class of neurons drives the functional decline of an entire neural network and the behavior it controls. Given the important role of the cholinergic system in neurodegeneration, the fly olfactory system could be a useful model for the identification of drug targets. PMID:29345616
Interactive Therapeutic Multi-sensory Environment for Cerebral Palsy People
NASA Astrophysics Data System (ADS)
Mauri, Cesar; Solanas, Agusti; Granollers, Toni; Bagés, Joan; García, Mabel
The Interactive Therapeutic Sensory Environment (ITSE) research project offers new opportunities for stimulation, interaction and interactive creation for people with moderate and severe mental and physical disabilities. Based mainly on computer vision techniques, the ITSE project allows the gathering of users' gestures and their transformation into images, sounds and vibrations. Currently, at the APPC, we are working on a prototype that is capable of generating sounds based on the users' motion and of digitally processing the users' vocal sounds. Tests with impaired users show that ITSE promotes participation, engagement and play. In this paper, we briefly describe the ITSE system, the experimental methodology, the preliminary results and some future goals.
Translator Plan: A Coordinated Vision for Fiscal Years 2018-2020
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riihimaki, Laura; Comstock, Jennifer; Collis, Scott
In June of 2017, the Translator Group met to develop this coordinated three-year vision plan, incorporating key feedback and aligning to ARM's mission priorities. This plan responds to a shift in how we determine our priorities, given the new needs of the ARM Facility. In the past, individual Translators have determined priorities in conversation with individual DOE Atmospheric System Research (ASR) working groups. To better support ARM's Decadal Vision (https://www.arm.gov/publications/programdocs/doe-sc-arm-14-029.pdf), however, the Translator Group is instead developing a coordinated response to needs from our user community to better balance resources and skills among participants. This approach agrees with direction from ARM leadership and the ARM-ASR Coordination Team (AACT). To develop this plan the Translator Group reviewed feedback received from the User Executive Committee (UEC) and the Triennial Review, as well as priorities from ASR working groups and Principal Investigators (PIs), the LES ARM Symbiotic Simulation and Observation (LASSO) project, and new instrumentation and activities as described by the ARM Technical Director. In particular, we are responding to the advice that we were trying to do too much, and should focus on providing additional support to data quality, uncertainty assessment, a timeline for producing core VAPs from ARM Mobile Facility (AMF) campaigns, and supporting key aspects of the Decadal Vision.
NASA Astrophysics Data System (ADS)
Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.
2008-04-01
The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes their comparison through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, this is the first time that a database of this size (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, is focused on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms.
The second set contains many views of the same vehicle in different contexts and situations simulating operational scenarios.
Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement
Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.
2017-01-01
Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975
Project nurse manager: an intrapreneurial role.
Risner, P B; Anderson, M L
1994-01-01
Nurse intrapreneurs are the key to innovation and cost-effective health care in the 1990s. A project nurse manager, acting as a liaison between service departments, can provide the vision and insight for the successful outcome of such projects as product evaluation, unit renovation, and the development of a new facility. The role, benefits, and outcomes of one project nurse manager are described.
Active solution of homography for pavement crack recovery with four laser lines.
Xu, Guan; Chen, Fang; Wu, Guangwei; Li, Xiaotao
2018-05-08
An active solution method for the homography, derived from four laser lines, is proposed to map the pavement cracks captured by the camera to real-dimension cracks in the pavement plane. The measurement system, including a camera and four laser projectors, captures the projection laser points on the 2D reference in different positions. The projection laser points are reconstructed in the camera coordinate system. Then, the laser lines are initialized and optimized from the projection laser points. Moreover, the plane-indicated Plücker matrices of the optimized laser lines are employed to model the laser projection points of the laser lines on the pavement. The image-pavement homography is actively determined from the solutions of the perpendicular feet of the projection laser points. The pavement cracks are recovered by the active solution of the homography in the experiments. The recovery accuracy of the active solution method is verified with the 2D dimension-known reference. The test case with a measurement distance of 700 mm and a relative angle of 8° achieves the smallest recovery error of 0.78 mm in the experimental investigations, which indicates the method's application potential in vision-based pavement inspection.
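The image-to-pavement mapping at the core of this method is a 3×3 planar homography, which is fully determined once four image points and their pavement-plane counterparts are known. A minimal sketch of that estimation step, with invented correspondences standing in for the laser-point feet (not the authors' implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (dst ~ H @ src in homogeneous coords) from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map an image pixel to pavement-plane coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical data: image pixels of four laser points and their known
# positions on the pavement plane (mm).
img = [(100, 120), (520, 110), (90, 400), (530, 410)]
pav = [(0, 0), (600, 0), (0, 450), (600, 450)]
H = homography_dlt(img, pav)
```

A detected crack pixel can then be mapped with `apply_h(H, pixel)`, and differences of mapped endpoints give crack dimensions in the pavement plane in millimetres.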
2011-11-01
Report RX-TY-TR-2011-0096-01 develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.
ERIC Educational Resources Information Center
Andrews, Gillian
2015-01-01
Possibilities for a different form of education have provided rich sources of inspiration for science fiction writers. Isaac Asimov, Orson Scott Card, Neal Stephenson, Octavia Butler, and Vernor Vinge, among others, have all projected their own visions of what education could be. These visions sometimes engage with technologies that are currently…
ERIC Educational Resources Information Center
Harris, Christopher J.; Penuel, William R.; D'Angelo, Cynthia M.; DeBarger, Angela Haydel; Gallagher, Lawrence P.; Kennedy, Cathleen A.; Cheng, Britte Haugen; Krajcik, Joseph S.
2015-01-01
The "Framework for K-12 Science Education" (National Research Council, 2012) sets an ambitious vision for science learning by emphasizing that for students to achieve proficiency in science they will need to participate in the authentic practices of scientists. To realize this vision, all students will need opportunities to learn from…
The Amazon Region; A Vision of Sovereignty
1998-04-06
...and SPOT remote sensing satellite images, about 90% of the Amazon jungle remains almost untouched. These 280 million hectares of vegetation hold... increasing energy needs, remain unanswered. Indian rights: has the Indian population been jeopardized by the development of the Amazon Region... or government agency. Strategy Research Project: The Amazon Region; A Vision of Sovereignty, by Lieutenant Colonel Eduardo Jose Barbosa.
49 CFR 571.218 - Standard No. 218; Motorcycle helmets.
Code of Federal Regulations, 2013 CFR
2013-10-01
... provide peripheral vision clearance of at least 105° to each side of the mid-sagittal plane, when the... basic plane that are within the angles of peripheral vision (see Figure 3). S5.5 Projections. A helmet... including 70 percent for a minimum of 4 hours. (b) Low temperature. Expose to any temperature from 5 °F to...
49 CFR 571.218 - Standard No. 218; Motorcycle helmets.
Code of Federal Regulations, 2014 CFR
2014-10-01
... provide peripheral vision clearance of at least 105° to each side of the mid-sagittal plane, when the... basic plane that are within the angles of peripheral vision (see Figure 3). S5.5 Projections. A helmet... including 70 percent for a minimum of 4 hours. (b) Low temperature. Expose to any temperature from 5 °F to...
Ability to Read Medication Labels Improved by Participation in a Low Vision Rehabilitation Program
ERIC Educational Resources Information Center
Markowitz, Samuel N.; Kent, Christine K.; Schuchard, Ronald A.; Fletcher, Donald C.
2008-01-01
Demographic projections indicate that the population of the Western world is aging, and evidence suggests an increase in the incidence of conditions, such as age-related macular degeneration (AMD), that produce visual impairments and result in low vision (Maberley et al., 2006). It is expected that in the United States and Canada, the annual…
Teacher Training Workshop for Educators of Students Who Are Blind or Low Vision
ERIC Educational Resources Information Center
Supalo, Cary A.; Dwyer, Danielle; Eberhart, Heather L.; Bunnag, Natasha; Mallouk, Thomas E.
2009-01-01
The Independent Laboratory Access for the Blind (ILAB) project has developed a suite of speech accessible tools for students who are blind or low vision to use in secondary and postsecondary science laboratory classes. The following are illustrations of experiments designed to be used by educators to introduce them to the ILAB tools, and to…
Color machine vision in industrial process control: case limestone mine
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.; Lemstrom, Guy F.; Koskinen, Seppo
1994-11-01
An optical sorter technology has been developed to improve the profitability of a mine using color line scan machine vision technology. The newly adapted technology lengthens the expected lifetime of the limestone mine and improves its efficiency. The project has also proved that today's color line scan technology can be successfully applied to industrial use in harsh environments.
Use of Open Architecture Middleware for Autonomous Platforms
NASA Astrophysics Data System (ADS)
Naranjo, Hector; Diez, Sergio; Ferrero, Francisco
2011-08-01
Network Enabled Capabilities (NEC) is the vision for next-generation systems in the defence domain formulated by governments, the European Defence Agency (EDA) and the North Atlantic Treaty Organization (NATO). It involves the federation of military information systems, rather than just a simple interconnection, to provide each user with the "right information, right place, right time - and not too much". It defines openness, standardization and flexibility principles in military systems, likewise applicable to civilian space applications. This paper provides the conclusions drawn from the "Architecture for Embarked Middleware" (EMWARE) study, funded by the European Defence Agency (EDA). The aim of the EMWARE project was to provide the information and understanding needed to facilitate informed decisions regarding the specification and implementation of Open Architecture Middleware in future distributed systems, linking it with the NEC goal. The EMWARE project included the definition of four business cases, each devoted to a different field of application (Unmanned Aerial Vehicles, Helicopters, Unmanned Ground Vehicles and the Satellite Ground Segment).
Cryogenics Testbed Laboratory Flange Baseline Configuration
NASA Technical Reports Server (NTRS)
Acuna, Marie Lei Ysabel D.
2013-01-01
As an intern at Kennedy Space Center (KSC), I was involved in research for the Fluids and Propulsion Division of the NASA Engineering (NE) Directorate. I was immersed in the Integrated Ground Operations Demonstration Units (IGODU) project for the majority of my time at KSC, primarily with the Ground Operations Demonstration Unit Liquid Oxygen (GODU LO2) branch of IGODU. This project was established to develop advancements in cryogenic systems as a part of KSC's Advanced Exploration Systems (AES) program. The vision of AES is to develop new approaches for human exploration and operations in and beyond low Earth orbit. Advanced cryogenic systems are crucial to minimize the consumable losses of cryogenic propellants, develop higher performance launch vehicles, and decrease operations cost for future launch programs. During my internship, I conducted a flange torque tracking study that established a baseline configuration for the flanges in the Simulated Propellant Loading System (SPLS) at the KSC Cryogenics Test Laboratory (CTL) - the testing environment for GODU LO2.
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
New directions for space solar power
NASA Astrophysics Data System (ADS)
Mankins, John C.
2009-07-01
Several of the central issues associated with the eventual realization of the vision of solar power from space for terrestrial markets revolve around the expected costs associated with the assembly, inspection, maintenance and repair of future solar power satellite (SPS) stations. In past studies (for example, NASA's "Fresh Look Study", c. 1995-1997) efforts were made to reduce both the scale and mass of large, systems-level interfaces (e.g., the power management and distribution (PMAD) system) and on-orbit fixed infrastructures through the use of modular systems strategies. These efforts have had mixed success (as reflected in the projected on-orbit mass of various systems concepts). However, the author remains convinced of the importance of modular strategies for exceptionally large space systems in eventually realizing the vision of power from space. This paper will introduce some of the key issues associated with cost-competitive space solar power in terrestrial markets. It will examine some of the relevant SPS concepts and will assess the pros and cons of each in terms of space assembly, maintenance and servicing (SAMS) requirements. The paper discusses at a high level some relevant concepts and technologies that may play a role in the eventual, successful resolution of these challenges. The paper concludes with an example of the kind of novel architectural approach for space solar power that is needed.
The research on calibration methods of dual-CCD laser three-dimensional human face scanning system
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong
2013-09-01
In this paper, on the basis of the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification. From these, the corresponding epipolar equations of the two cameras can be defined. By utilizing the trigonometric parallax method, we can measure the space point position after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy and guarantees system stability. The stereo matching calibration is a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates from a planar checkerboard calibration alone, without the need to design a specific standard target or to use an electronic theodolite. It is found during the experiment that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line laser scanning with binocular stereo vision, has the advantages of both and more flexible applicability. Theoretical analysis and experiments show that the method is sound.
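After epipolar rectification, corresponding points lie on the same image row, and the trigonometric-parallax step reduces to the familiar disparity relation. A toy sketch under assumed camera parameters (not the paper's rig):

```python
def rectified_point(f_px, baseline_m, cx, cy, u_left, u_right, v):
    """Depth and 3D position of a rectified stereo match.
    f_px: focal length in pixels; baseline_m: camera separation (m);
    (cx, cy): principal point; u_left/u_right: match column in each
    image; v: the shared image row."""
    disparity = u_left - u_right        # pixels
    z = f_px * baseline_m / disparity   # trigonometric parallax
    x = (u_left - cx) * z / f_px
    y = (v - cy) * z / f_px
    return x, y, z

# Assumed rig: 800 px focal length, 10 cm baseline, 640x480 images.
x, y, z = rectified_point(800.0, 0.10, 320, 240, 340, 300, 240)
print(z)  # 2.0 (metres)
```

The epipolar-rectified PPMs described in the abstract are what guarantee the matched points share a row, so the 1D disparity search and this closed-form depth become valid.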
Coal ash by-product reutilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muncy, J.; Miller, B.
1997-09-01
Potomac Electric Power Company (PEPCO) has as part of its vision and value statement that, ``We are responsible stewards of environmental and corporate resources.`` With this moral imperative in mind, a project team was charged with initiating the Coal Pile Liner Project--installing a membrane liner under the existing coal storage pile at the Morgantown Generating Station. The existing coal yard facilities were constructed prior to the current environmental regulations, and it became necessary to upgrade the storage facilities to be environmentally friendly. The project team had two objectives in this project: (1) prevent coal pile leachate from entering the groundwater system; (2) test the viability of using coal ash by-products as an aggregate substitute for concrete applications. Both objectives were met, and two additional benefits were achieved as well: (1) the use of coal ash by-products as a coal liner produced significant cost savings to the project directly; (2) the use of coal ash by-products reduced plant operation and maintenance expenses.
Overview of the NASA/Science Mission Directorate University Student Instrument Project (USIP)
NASA Astrophysics Data System (ADS)
Pierce, D. L.
2016-12-01
These are incredible times of space and Earth science discovery related to the Earth system, our Sun, the planets, and the universe. The National Aeronautics and Space Administration (NASA) Science Mission Directorate (SMD) provides authentic student-led hands-on flight research projects as a component part of the NASA's science program. The goal of the Undergraduate Student Instrument Project (USIP) is to enable student-led scientific and technology investigations, while also providing crucial hands-on training opportunities for the Nation's future researchers. SMD, working with NASA's Office of Education (OE), the Space Technology Mission Directorate (STMD) and its Centers (GSFC/WFF and AFRC), is actively advancing the vision for student flight research using NASA's suborbital and small spacecraft platforms. Recently proposed and selected USIP projects will open up opportunities for undergraduate researchers in conducting science and developing space technologies. The paper will present an overview of USIP, results of USIP-I, and the status of current USIP-II projects that NASA is sponsoring and expects to fly in the near future.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced that combines the Fisher Vector with Random Projection. The method reduces the trajectory features by projecting the high-dimensional trajectory descriptor into a low-dimensional subspace via Random Projection, after defining and analyzing a Gaussian mixture model. A GMM-FV hybrid model is introduced to encode the trajectory feature vector and reduce its dimension. The computational complexity is reduced by Random Projection, which shortens the Fisher coding vector. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with some existing algorithms, the results show that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
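The encoding stage described above can be sketched as follows: descriptors are first reduced with a Gaussian random projection, then encoded against a GMM. For brevity this illustrative Fisher vector keeps only the gradients with respect to the component means (a full FV also stacks weight and variance gradients), and all dimensions and parameters are invented for the example.

```python
import numpy as np

def random_project(X, d_out, rng):
    """Johnson-Lindenstrauss style Gaussian random projection."""
    R = rng.standard_normal((X.shape[1], d_out)) / np.sqrt(d_out)
    return X @ R

def fisher_vector_means(X, weights, means, sigmas):
    """Illustrative Fisher vector: mean-gradients of a diagonal GMM only."""
    K = len(weights)
    # Per-component Gaussian log-likelihoods of every descriptor.
    logp = np.stack([
        np.log(weights[k])
        - 0.5 * np.sum(((X - means[k]) / sigmas[k]) ** 2
                       + np.log(2 * np.pi * sigmas[k] ** 2), axis=1)
        for k in range(K)
    ], axis=1)
    logp -= logp.max(axis=1, keepdims=True)       # numerically stable softmax
    gamma = np.exp(logp)
    gamma /= gamma.sum(axis=1, keepdims=True)     # posterior responsibilities
    N = X.shape[0]
    parts = [
        (gamma[:, k:k + 1] * (X - means[k]) / sigmas[k]).sum(axis=0)
        / (N * np.sqrt(weights[k]))
        for k in range(K)
    ]
    return np.concatenate(parts)

rng = np.random.default_rng(0)
traj = rng.standard_normal((500, 96))     # fake trajectory descriptors
low = random_project(traj, 16, rng)       # 96 -> 16 dims via random projection
weights = np.array([0.5, 0.3, 0.2])       # toy 3-component diagonal GMM
means = rng.standard_normal((3, 16))
sigmas = np.ones((3, 16))
fv = fisher_vector_means(low, weights, means, sigmas)
print(fv.shape)  # (48,) = K * d, ready for a linear SVM
```

In the paper's pipeline the GMM would be fit to projected training descriptors and the resulting per-video FVs fed to the linear SVM; here the GMM parameters are simply random placeholders to show the shapes involved.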
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and builds a three-dimensional image of the surface from those data. This method proved accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance the estimation of sex from osteological material.
From wheels to wings with evolutionary spiking circuits.
Floreano, Dario; Zufferey, Jean-Christophe; Nicoud, Jean-Daniel
2005-01-01
We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots.
MARVEL: A System for Recognizing World Locations with Stereo Vision
1990-05-01
Report date: May 1990. Sponsoring agency: Advanced Research Projects Agency, 1400 Wilson Blvd., Arlington, VA 22209. 245 pages. This research was supported in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124 and under an Army contract. From the abstract: ...a priori knowledge of the locations of the obstacles in the environment as well as the start and goal locations. In this thesis, however, I am concerned with...
Reserve Component Programs, Fiscal Year 1993. Report of the Reserve Forces Policy Board
1994-01-01
A quote from Shakespeare's Julius Caesar: "the Unicorns are gathering." The Bottom-Up Review and the Board's vision for Reserve components... projects a shortfall of $2.8 million in 1994 as a result of fees and simulator procurement... The (JSS-RC) pay system was completed by the Defense Finance and Accounting Service in July 1993, replacing...
The 4-D approach to visual control of autonomous systems
NASA Technical Reports Server (NTRS)
Dickmanns, Ernst D.
1994-01-01
The development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented toward objects and the laws of perspective projection in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models serving as invariants for object recognition. Maintaining a symbolic 4-D image of processes involving objects allowed situation assessment and long-term predictions. Behavioral capabilities were easily realized by state feedback and feed-forward control.
Machine vision systems using machine learning for industrial product inspection
NASA Astrophysics Data System (ADS)
Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony
2002-02-01
Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages: Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learned by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
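The LIF/OLI split can be illustrated with a minimal sketch: a learning stage estimates per-pixel statistics from known-good samples, and an inspection stage flags pixels that deviate from the learned model. The class name, threshold, and synthetic data below are invented for illustration and are not part of the SMV system:

```python
import numpy as np

class TwoStageInspector:
    """Toy learn-then-inspect pipeline: per-pixel statistics from good samples,
    then flag pixels whose z-score exceeds a threshold at inspection time."""

    def learn(self, good_images, k=5.0):
        stack = np.stack(good_images).astype(float)
        self.mean = stack.mean(axis=0)          # learned per-pixel intensity
        self.std = stack.std(axis=0) + 1e-6     # learned per-pixel variation
        self.k = k                              # z-score rejection threshold

    def inspect(self, image):
        z = np.abs(image - self.mean) / self.std
        defects = z > self.k                    # pixels far from the learned model
        return defects.any(), defects

rng = np.random.default_rng(1)
good = [100 + rng.normal(0, 2, (32, 32)) for _ in range(20)]
inspector = TwoStageInspector()
inspector.learn(good)                           # "LIF" stage

bad = good[0].copy()
bad[10:14, 10:14] += 60                         # simulated bright defect patch
print(inspector.inspect(good[1])[0], inspector.inspect(bad)[0])  # "OLI" stage
```

Real systems learn structured features (from CAD data or display patterns) rather than raw pixel statistics, but the two-stage control flow is the same.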
NASA Technical Reports Server (NTRS)
Abercromby, Andrew F. J.; Thaxton, Sherry S.; Onady, Elizabeth A.; Rajulu, Sudhakar L.
2006-01-01
The Science Crew Operations and Utility Testbed (SCOUT) project is focused on the development of a rover vehicle that can be utilized by two crewmembers during extra vehicular activities (EVAs) on the moon and Mars. The current SCOUT vehicle can transport two suited astronauts riding in open cockpit seats. Among the aspects currently being developed is the cockpit design and layout. This process includes the identification of possible locations for a socket to which a crewmember could connect a portable life support system (PLSS) for recharging power, air, and cooling while seated in the vehicle. The spaces in which controls and connectors may be situated within the vehicle are constrained by the reach and vision capabilities of the suited crewmembers. Accordingly, quantification of the volumes within which suited crewmembers can both see and reach relative to the vehicle represents important information during the design process.
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
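The flavor of such an adaptation law for a linearly parameterized measurement model y = W(t)θ can be sketched generically. This is a textbook gradient estimator with an invented regressor, shown only to illustrate the idea of driving parameter estimates with a prediction error; it is not the paper's omnidirectional-vision algorithm:

```python
import numpy as np

theta_true = np.array([1.5, -0.7])    # unknown parameters (e.g., a position)
theta_hat = np.zeros(2)               # estimate, initialized at zero
gamma = 2.0                           # adaptation gain
dt = 0.01

for step in range(20000):
    t = step * dt
    # time-varying regressor; persistently exciting so the estimate converges
    W = np.array([[np.sin(t),     np.cos(t)],
                  [np.cos(2 * t), np.sin(3 * t)]])
    y = W @ theta_true                # measurement predicted linearly by theta
    e = W @ theta_hat - y             # prediction error
    theta_hat -= dt * gamma * (W.T @ e)   # gradient adaptation law

print(np.round(theta_hat, 3))
```

With a persistently exciting regressor, the estimation error decays exponentially to zero, which is the kind of global exponential convergence guarantee the abstract refers to.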
NASA Astrophysics Data System (ADS)
Isik-Ercan, Zeynep; Kim, Beomjin; Nowak, Jeffrey
This research-in-progress hypothesizes that urban second graders can gain an early understanding of the shapes of the Sun, Moon, and Earth, how day and night happen, and how the Moon appears to change its shape, by using three-dimensional stereoscopic vision. The 3D stereoscopic vision system might be an effective way to teach subjects like astronomy that explain relationships among objects in space. Currently, Indiana state standards for science teaching do not suggest teaching these astronomical concepts explicitly before fourth grade. Yet we expect our findings to indicate that students can learn these concepts earlier in their educational lives with the implementation of such technologies. We also project that these technologies could revolutionize when these concepts are taught to children and expand the ways we think about children's cognitive capacities in understanding scientific concepts.
Transit safety retrofit package development : final report.
DOT National Transportation Integrated Search
2014-07-01
This report provides a summary of the Transit Safety Retrofit Package (TRP) Development project and its results. The report documents results of each project phase, and provides recommended next steps as well as a vision for a next generation TRP. Th...
NASA Technical Reports Server (NTRS)
Kopardekar, Parimal H.
2010-01-01
This document describes the FY2010 plan for the management and execution of the Next Generation Air Transportation System (NextGen) Concepts and Technology Development (CTD) Project. The document was developed in response to guidance from the Airspace Systems Program (ASP), as approved by the Associate Administrator of the Aeronautics Research Mission Directorate (ARMD), and from guidelines in the Airspace Systems Program Plan. Congress established the multi-agency Joint Planning and Development Office (JPDO) in 2003 to develop a vision for the 2025 Next Generation Air Transportation System (NextGen) and to define the research required to enable it. NASA is one of seven agency partners contributing to the effort. Accordingly, NASA's ARMD realigned the Airspace Systems Program in 2007 to "directly address the fundamental research needs of the Next Generation Air Transportation System...in partnership with the member agencies of the JPDO." The Program subsequently established two new projects to meet this objective: the NextGen-Airspace Project and the NextGen-Airportal Project. Together, the projects will also focus NASA's technical expertise and world-class facilities on the question of where, when, how, and to what extent automation can be applied to moving aircraft safely and efficiently through the NAS, and on technologies that address the optimal allocation of ground and air technologies necessary for NextGen. Additionally, the roles and responsibilities of humans and the influence of automation in the NAS will be addressed by both projects. Foundational concept and technology research and development begun under the NextGen-Airspace and NextGen-Airportal projects will continue. There will be no change in NASA Research Announcement (NRA) strategy, nor will there be any change to NASA interfaces with the JPDO, Federal Aviation Administration (FAA), Research Transition Teams (RTTs), or other stakeholders.
NASA Technical Reports Server (NTRS)
Aquilina, Rudolph A.
2015-01-01
The SMART-NAS Testbed for Safe Trajectory Based Operations Project will deliver an evaluation capability, critical to the ATM community, allowing full NextGen and beyond-NextGen concepts to be assessed and developed. To meet this objective a strong focus will be placed on concept integration and validation to enable a gate-to-gate trajectory-based system capability that satisfies a full vision for NextGen. The SMART-NAS for Safe TBO Project consists of six sub-projects. Three of the sub-projects are focused on exploring and developing technologies, concepts and models for evolving and transforming air traffic management operations in the ATM+2 time horizon, while the remaining three sub-projects are focused on developing the tools and capabilities needed for testing these advanced concepts. Function Allocation, Networked Air Traffic Management and Trajectory Based Operations are developing concepts and models. SMART-NAS Test-bed, System Assurance Technologies and Real-time Safety Modeling are developing the tools and capabilities to test these concepts. Simulation and modeling capabilities will include the ability to assess multiple operational scenarios of the national airspace system, accept data feeds, allowing shadowing of actual operations in either real-time, fast-time and/or hybrid modes of operations in distributed environments, and enable integrated examinations of concepts, algorithms, technologies, and NAS architectures. An important focus within this project is to enable the development of a real-time, system-wide safety assurance system. The basis of such a system is a continuum of information acquisition, analysis, and assessment that enables awareness and corrective action to detect and mitigate potential threats to continuous system-wide safety at all levels. This process, which currently can only be done post operations, will be driven towards "real-time" assessments in the 2035 time frame.
Application of aircraft navigation sensors to enhanced vision systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.
1993-01-01
In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
NASA Astrophysics Data System (ADS)
Näsilä, Antti; Holmlund, Christer; Mannila, Rami; Näkki, Ismo; Ojanen, Harri J.; Akujärvi, Altti; Saari, Heikki; Fussen, Didier; Pieroux, Didier; Demoulin, Philippe
2016-10-01
PICASSO - A PICo-satellite for Atmospheric and Space Science Observations is an ESA project led by the Belgian Institute for Space Aeronomy, in collaboration with VTT Technical Research Centre of Finland Ltd, Clyde Space Ltd. (UK) and Centre Spatial de Liège (BE). The test campaign for the engineering model of the PICASSO VISION instrument, a miniaturized nanosatellite spectral imager, has been successfully completed. The test results look very promising. The proto-flight model of VISION has also been successfully integrated and it is waiting for the final integration to the satellite platform.
Hubble Space Telescope: the new telemetry archiving system
NASA Astrophysics Data System (ADS)
Miebach, Manfred P.
2000-07-01
The Hubble Space Telescope (HST), the first of NASA's Great Observatories, was launched on April 24, 1990. The HST was designed for a minimum fifteen-year mission with on-orbit servicing by the Space Shuttle System planned at approximately three-year intervals. Major changes to the HST ground system have been implemented for the third servicing mission in December 1999. The primary objectives of the ground system re-engineering effort, a project called 'Vision 2000 Control Center System (CCS),' are to reduce both development and operating costs significantly for the remaining years of HST's lifetime. Development costs are reduced by providing a more modern hardware and software architecture and utilizing commercial off the shelf (COTS) products wherever possible. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on current Data Warehouse technology. The Data Warehouse (Red Brick), as implemented in the CCS Ground System that operates and monitors the Hubble Space Telescope, represents the first use of a commercial Data Warehouse to manage engineering data. The purpose of this data store is to provide a common data source of telemetry data for all HST subsystems. This data store will become the engineering data archive and will provide a queryable database for the user to analyze HST telemetry. The access to the engineering data in the Data Warehouse is platform-independent from an office environment using commercial standards (Unix, Windows98/NT). The latest Internet technology is used to reach the HST engineering community. A WEB-based user interface allows easy access to the data archives. This paper will provide a CCS system overview and will illustrate some of the CCS telemetry capabilities: in particular the use of the new Telemetry Archiving System. Vision 2000 is an ambitious project, but one that is well under way.
It will allow the HST program to realize reduced operations costs for the Third Servicing Mission and beyond.
FLORA™: Phase I development of a functional vision assessment for prosthetic vision users
Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy
2014-01-01
Background Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population in order to evaluate the impact of new vision restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional vision ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. Methods The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional vision tasks for observation of performance, and a case narrative summary. Results were analyzed to determine whether the interview questions and functional vision tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, twenty-six subjects were assessed with the FLORA. Seven different evaluators administered the assessment. Results All 14 interview questions were asked. All 35 functional vision tasks were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options -- impossible (33%), difficult (23%), moderate (24%) and easy (19%) -- were used by the evaluators. Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with vision only occurring 75% on average with the System ON, and 29% with the System OFF. Conclusion The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as a functional vision and well-being assessment tool. PMID:25675964
Short-Term Neural Adaptation to Simultaneous Bifocal Images
Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Sawides, Lucie; Marcos, Susana
2014-01-01
Simultaneous vision is an increasingly used solution for the correction of presbyopia (the age-related loss of ability to focus near images). Simultaneous Vision corrections, normally delivered in the form of contact or intraocular lenses, project on the patient's retina a focused image for near vision superimposed with a degraded image for far vision, or a focused image for far vision superimposed with the defocused image of the near scene. It is expected that patients with these corrections are able to adapt to the complex Simultaneous Vision retinal images, although the mechanisms or the extent to which this happens is not known. We studied the neural adaptation to simultaneous vision by studying changes in the Natural Perceived Focus and in the Perceptual Score of image quality in subjects after exposure to Simultaneous Vision. We show that Natural Perceived Focus shifts after a brief period of adaptation to a Simultaneous Vision blur, similar to adaptation to Pure Defocus. This shift strongly correlates with the magnitude and proportion of defocus in the adapting image. The magnitude of defocus affects perceived quality of Simultaneous Vision images, with 0.5 D defocus scored lowest and beyond 1.5 D scored “sharp”. Adaptation to Simultaneous Vision shifts the Perceptual Score of these images towards higher rankings. Larger improvements occurred when testing simultaneous images with the same magnitude of defocus as the adapting images, indicating that wearing a particular bifocal correction improves the perception of images provided by that correction. PMID:24664087
2020 Vision for Tank Waste Cleanup (One System Integration) - 12506
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harp, Benton; Charboneau, Stacy; Olds, Erik
2012-07-01
The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot. They also couldn't be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold these chemicals until a more permanent solution could be found. The Cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing Hanford's tank waste.
The initiation of hot operations and subsequent full operations of the WTP are not only dependent upon the successful design and construction of the WTP, but also on appropriately preparing the tank farms and waste feed delivery infrastructure to reliably and consistently deliver waste feed to the WTP for many decades. The key components of the 2020 vision are: all WTP facilities are commissioned, turned-over and operational, achieving the earliest possible hot operations of completed WTP facilities, and supplying low-activity waste (LAW) feed directly to the LAW Facility using in-tank/near tank supplemental treatment technologies. A One System Integrated Project Team (IPT) was recently formed to focus on developing and executing the programs that will be critical to successful waste feed delivery and WTP startup. The team is comprised of members from Bechtel National, Inc. (BNI), Washington River Protection Solutions LLC (WRPS), and DOE-ORP and DOE-WTP. The IPT will combine WTP and WRPS capabilities in a mission-focused model that is clearly defined, empowered and cost efficient. The genesis for this new team and much of the 2020 vision is based on the work of an earlier team that was tasked with identifying the optimum approach to startup, commissioning, and turnover of WTP facilities for operations. This team worked backwards from 2020 - a date when the project will be completed and steady-state operations will be underway - and identified success criteria to achieving safe and efficient operations of the WTP. The team was not constrained by any existing contract work scope, labor, or funding parameters. 
Several essential strategies were identified to effectively realize the one-system model of integrated feed stream delivery, WTP operations, and product delivery, and to accomplish the team's vision of hot operations beginning in 2016: - Use a phased startup and turnover approach that will allow WTP facilities to be transitioned to an operational state on as short a timeline as credible. - Align Tank Farm (TF) and WTP objectives such that feed can be supplied to the WTP when it is required for hot operations. - Ensure immobilized waste and waste recycle streams can be received by the TF when required to support 2016 production of immobilized low-activity waste (ILAW). - Ensure the required baseline and additional funding is provided beginning in fiscal year 2011. - Modify TF and WTP contracts to adequately address this vision. The 2020 Vision provides a summary of strategies and key actions that optimize the approach to startup, commissioning, and turnover of WTP facilities. This vision focuses on the legally enforceable requirement to achieve the Consent Decree milestones of starting radioactive operations in 2019, and achieving initial WTP operations in 2022. (authors)
Night vision: changing the way we drive
NASA Astrophysics Data System (ADS)
Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.
2001-03-01
A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.
A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)
Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon
1990-01-01
Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...
Vision-mediated interaction with the Nottingham caves
NASA Astrophysics Data System (ADS)
Ghali, Ahmed; Bayomi, Sahar; Green, Jonathan; Pridmore, Tony; Benford, Steve
2003-05-01
The English city of Nottingham is widely known for its rich history and compelling folklore. A key attraction is the extensive system of caves to be found beneath Nottingham Castle. Regular guided tours are made of the Nottingham caves, during which castle staff tell stories and explain historical events to small groups of visitors while pointing out relevant cave locations and features. The work reported here is part of a project aimed at enhancing the experience of cave visitors, and providing flexible storytelling tools to their guides, by developing machine vision systems capable of identifying specific actions of guides and/or visitors and triggering audio and/or video presentations as a result. Attention is currently focused on triggering audio material by directing the beam of a standard domestic flashlight towards features of interest on the cave wall. Cameras attached to the walls or roof provide image sequences within which torch light and cave features are detected and their relative positions estimated. When a target feature is illuminated the corresponding audio response is generated. We describe the architecture of the system, its implementation within the caves and the results of initial evaluations carried out with castle guides and members of the public.
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2006-01-01
A usability study evaluating dynamic tunnel concepts has been completed under the Aviation Safety and Security Program, Synthetic Vision Systems Project. The usability study was conducted in the Visual Imaging Simulator for Transport Aircraft Systems (VISTAS) III simulator in the form of questionnaires and pilot-in-the-loop simulation sessions. Twelve commercial pilots participated in the study to determine their preferences via paired comparisons and subjective rankings regarding the color, line thickness and sensitivity of the dynamic tunnel. The results of the study showed that color was not significant in pilot preference paired comparisons or in pilot rankings. Line thickness was significant for both pilot preference paired comparisons and in pilot rankings. The preferred line/halo thickness combination was a line width of 3 pixels and a halo of 4 pixels. Finally, pilots were asked their preference for the current dynamic tunnel compared to a less sensitive dynamic tunnel. The current dynamic tunnel constantly gives feedback to the pilot with regard to path error while the less sensitive tunnel only changes as the path error approaches the edges of the tunnel. The tunnel sensitivity comparison results were not statistically significant.
12 strategies for managing capital projects.
Stoudt, Richard L
2013-05-01
To reduce the amount of time and cost associated with capital projects, healthcare leaders should: Begin the project with a clear objective and a concise master facilities plan. Select qualified team members who share the vision of the owner. Base the size of the project on a conservative business plan. Minimize incremental program requirements. Evaluate the cost impact of the building footprint. Consider alternative delivery methods.
Machine Vision Systems for Processing Hardwood Lumber and Logs
Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline
1992-01-01
Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...
Machine vision system for inspecting characteristics of hybrid rice seed
NASA Astrophysics Data System (ADS)
Cheng, Fang; Ying, Yibin
2004-03-01
Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background were discussed in this paper. Analysis of rice seed reflectance curves showed that the appropriate light source wavelength for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed with a computer vision system, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the machine vision system produced close-up images that made it easier to recognize characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease and for shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system, and the same algorithm yielded better results under optimized conditions for quality inspection of rice seed. In particular, image processing with the machine vision system can capture details such as fine fissures.
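The white-background, shape-based step can be illustrated with a minimal sketch: binarize the seed against the bright background, then use a bounding-box aspect ratio as a crude shape feature. The threshold values and the tiny synthetic image are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: on a white background the seed pixels are the dark ones,
# and a bounding-box aspect ratio already separates elongated rice seeds
# from round contaminants. Thresholds and the image are assumptions.

def seed_mask(frame, white_level=200):
    """Binarize: pixels darker than the white background are seed pixels."""
    return [[1 if v < white_level else 0 for v in row] for row in frame]

def aspect_ratio(mask):
    """Bounding-box aspect ratio (>= 1.0) of the foreground, 0.0 if empty."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return 0.0
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    return max(h, w) / min(h, w)

# Synthetic 6x10 image: a dark elongated "seed" on a white background.
img = [[255] * 10 for _ in range(6)]
for c in range(1, 9):
    img[2][c] = img[3][c] = 40

print(round(aspect_ratio(seed_mask(img)), 1))  # 4.0
```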
In-process fault detection for textile fabric production: onloom imaging
NASA Astrophysics Data System (ADS)
Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til
2011-05-01
Constant and traceable high fabric quality is of great importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personnel, whose detection rate and maximum period of concentration are limited. Low-resolution automated fabric inspection machines using texture analysis have been developed, and since 2003 systems for in-process inspection on weaving machines ("onloom") have been commercially available. With these, defects can be detected but not measured quantitatively with precision. Most systems are also prone to inevitable machine vibrations, and feedback loops for fault prevention are not established. Technology has evolved since 2003: camera and computer prices have dropped, resolutions have been enhanced, and recording speeds have increased. These are the preconditions for real-time processing of high-resolution images, but so far these new technological achievements are not used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process, and new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with a quality database for defect documentation. The system is meant to detect, measure, and classify at least 80% of economically relevant defects. Concepts for feedback loops into the weaving process will also be outlined.
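One simple onloom-style check can be sketched as follows: in a regular weave the per-column intensity profile is nearly periodic, so a missing warp thread appears as a column whose summed intensity deviates strongly from the median. The synthetic image and deviation threshold are assumptions; the project's actual algorithms are more sophisticated.

```python
# Hedged sketch: flag columns of a fabric image whose summed intensity
# deviates from the median column profile. Image and tolerance are
# illustrative assumptions, not the project's detection algorithm.

def column_profile(img):
    return [sum(col) for col in zip(*img)]

def defect_columns(img, rel_tol=0.3):
    """Columns whose summed intensity deviates > rel_tol from the median."""
    prof = column_profile(img)
    med = sorted(prof)[len(prof) // 2]
    return [c for c, v in enumerate(prof) if abs(v - med) > rel_tol * med]

# Regular "weave": uniform intensity, except column 4 (missing warp thread).
img = [[100] * 8 for _ in range(10)]
for row in img:
    row[4] = 20  # missing thread appears dark

print(defect_columns(img))  # [4]
```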
AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.
Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott
2014-11-01
This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors, each having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.
ERIC Educational Resources Information Center
Ajuwon, Paul M.; Oyinlade, A. Olu
2016-01-01
In this project, the authors used the Essential Behavioral Leadership Qualities (EBLQ) method of measuring leadership effectiveness to assess and compare the effectiveness of principals (leaders) of residential schools for children with blindness or low vision in the United States (U.S.) and Nigeria. A total of 248 teachers (subordinates) in 25…
Video rate color region segmentation for mobile robotic applications
NASA Astrophysics Data System (ADS)
de Cabrol, Aymeric; Bonnin, Patrick J.; Hugel, Vincent; Blazevic, Pierre; Chetto, Maryline
2005-08-01
Color regions may be an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. But, whereas numerous methods are used for vision systems embedded on robots, only a few use this segmentation, mainly because of the processing duration. In this paper, we propose a new real-time (i.e., video rate) color region segmentation followed by a robust color classification and a merging of regions, dedicated to various applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. The performance of this algorithm and comparisons with other methods, in terms of result quality and processing time, are provided. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it is up to 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE Project, for which this segmentation has been developed, and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and knowledge of other well-known segmentation methods.
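The region-segmentation step can be illustrated with a minimal sketch: classify each pixel into a coarse color class, grow 4-connected regions of the same class, and discard regions below a size threshold. The color classes and the toy image are assumptions, not the CLEOPATRE implementation.

```python
# Hedged sketch of color-class region segmentation: per-pixel color
# classification followed by 4-connected region growing (BFS) and removal
# of small regions. Classes, thresholds, and image are illustrative.
from collections import deque

def classify(pixel):
    r, g, b = pixel
    if r > 150 and g < 100 and b < 100:
        return "red"
    if g > 150 and r < 100 and b < 100:
        return "green"
    return "other"

def regions(img, min_size=2):
    """List of (class, size) for 4-connected same-class regions."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            cls = classify(img[r][c])
            q, size = deque([(r, c)]), 0
            seen[r][c] = True
            while q:
                y, x = q.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and classify(img[ny][nx]) == cls:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if size >= min_size:  # merge-away of tiny regions
                out.append((cls, size))
    return out

RED, GRN, BLK = (200, 0, 0), (0, 200, 0), (0, 0, 0)
img = [[RED, RED, BLK],
       [RED, GRN, GRN],
       [BLK, GRN, GRN]]
print(regions(img))  # [('red', 3), ('green', 4)]
```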
Coupling sensing to crop models for closed-loop plant production in advanced life support systems
NASA Astrophysics Data System (ADS)
Cavazzoni, James; Ling, Peter P.
1999-01-01
We present a conceptual framework for coupling sensing to crop models for closed-loop analysis of plant production for NASA's program in advanced life support. Crop status may be monitored through non-destructive observations, while models may be independently applied to crop production planning and decision support. To achieve coupling, environmental variables and observations are linked to model inputs and outputs, and monitoring results are compared with model predictions of plant growth and development. The information thus provided may be useful in diagnosing problems with the plant growth system, or as feedback to the model for evaluation of plant scheduling and potential yield. In this paper, we demonstrate this coupling using machine vision sensing of canopy height and top projected canopy area, and the CROPGRO crop growth model. Model simulations and scenarios are used for illustration. We also compare model predictions of the machine vision variables with data from soybean experiments conducted at the New Jersey Agriculture Experiment Station Horticulture Greenhouse Facility, Rutgers University. Model simulations produce reasonable agreement with the available data, supporting our illustration.
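The model-observation coupling can be sketched as follows, assuming for illustration a simple logistic canopy-growth curve in place of CROPGRO: the machine-vision observation is compared to the model prediction, and a large relative error flags the growth system for diagnosis. All parameters and data here are made-up values.

```python
# Hedged sketch of closed-loop coupling: a toy logistic model predicts top
# projected canopy area; vision "observations" are checked against it.
# Model parameters, tolerance, and data are illustrative assumptions.
import math

def predicted_canopy_area(day, a_max=1.0, k=0.2, t_mid=20.0):
    """Logistic prediction of top projected canopy area (arbitrary units)."""
    return a_max / (1.0 + math.exp(-k * (day - t_mid)))

def check_crop(day, observed_area, rel_tol=0.15):
    """Return (relative_error, ok_flag) comparing observation to model."""
    pred = predicted_canopy_area(day)
    err = abs(observed_area - pred) / pred
    return err, err <= rel_tol

err, ok = check_crop(20, 0.48)    # model predicts exactly 0.5 at t_mid
print(ok)   # True: observation agrees with the model
err2, ok2 = check_crop(20, 0.30)
print(ok2)  # False: large deviation flags the growth system
```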
Decadal Vision Progress Report Implementation Plans and Status for the Next Generation ARM Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, James
The reconfiguration of the ARM facility, formally initiated in early 2014, is geared toward implementing the Next Generation of the ARM Facility, which will more tightly link ARM measurements and atmospheric models. The strategy is outlined in the ARM Climate Research Facility Decadal Vision (DOE 2014a). The strategy includes the implementation of a high-resolution model, initially at the Southern Great Plains (SGP) site, and enhancements at the SGP and North Slope of Alaska (NSA) sites to provide additional observations to support modeling and process studies. Enhancements at the SGP site focus on ground-based instruments while enhancements at the NSA make use of Unmanned Aerial Systems (UAS) and Tethered Balloon Systems (TBS). It is also recognized that new data tools and data products will need to be developed to take full advantage of these improvements. This document provides an update on the status of these ARM facility enhancements, beginning with the measurement enhancements at the SGP and NSA, followed by a discussion of the modeling project including associated data-processing activities.
PROJECTIONS OF FRACTAL FUNCTIONS: A NEW VISION OF NATURE'S COMPLEXITY. (R824780)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
A zero waste vision for industrial networks in Europe.
Curran, T; Williams, I D
2012-03-15
'ZeroWIN' (Towards Zero Waste in Industrial Networks--www.zerowin.eu) is a five year project running 2009-2014, funded by the EC under the 7th Framework Programme. Project ZeroWIN envisions industrial networks that have eliminated the wasteful consumption of resources. Zero waste is a unifying concept for a range of measures aimed at eliminating waste and challenging old ways of thinking. Aiming for zero waste will mean viewing waste as a potential resource with value to be realised, rather than as a problem to be dealt with. The ZeroWIN project will investigate and demonstrate how existing approaches and tools can be improved and combined to best effect in an industrial network, and how innovative technologies can contribute to achieving the zero waste vision.
NASA Technical Reports Server (NTRS)
Chavez, Carlos; Hammel, Bruce; Hammel, Allan; Moore, John R.
2014-01-01
Unmanned Aircraft Systems (UAS) represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the National Airspace System (NAS). To address this deficiency, NASA has established a project called UAS Integration in the NAS (UAS in the NAS), under the Integrated Systems Research Program (ISRP) of the Aeronautics Research Mission Directorate (ARMD). This project provides an opportunity to transition concepts, technology, algorithms, and knowledge to the Federal Aviation Administration (FAA) and other stakeholders to help them define the requirements, regulations, and issues for routine UAS access to the NAS. The safe, routine, and efficient integration of UAS into the NAS requires new radio frequency (RF) spectrum allocations and a new data communications system which is both secure and scalable with increasing UAS traffic without adversely impacting the Air Traffic Control (ATC) communication system. These data communications, referred to as Control and Non-Payload Communications (CNPC), are intended to exchange information between the unmanned aircraft and the ground control station to ensure safe, reliable, and effective unmanned aircraft flight operation. A Communications Subproject within the UAS in the NAS Project has been established to address issues related to CNPC development, certification, and fielding. The focus of the Communications Subproject is on validating and allocating new RF spectrum and data link communications to enable civil UAS integration into the NAS. The goal is to validate secure, robust data links within the allocated frequency spectrum for UAS.
A vision, architectural concepts, and seed requirements for the future commercial UAS CNPC system have been developed by RTCA Special Committee 203 (SC-203) in the process of determining formal recommendations to the FAA in its role provided for under the Federal Advisory Committee Act. NASA intends to conduct its research and development in keeping with this vision and associated architectural concepts. The prototype communication systems developed and tested by NASA will be used to validate and update the initial SC-203 requirements in order to provide a foundation for SC-203's Minimum Aviation System Performance Standards (MASPS).
NASA Technical Reports Server (NTRS)
Brooks, Rodney Allen; Stein, Lynn Andrea
1994-01-01
We describe a project to capitalize on newly available levels of computational resources in order to understand human cognition. We will build an integrated physical system including vision, sound input and output, and dextrous manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system will learn to 'think' by building on its bodily experiences to accomplish progressively more abstract tasks. Past experience suggests that in attempting to build such an integrated system we will have to fundamentally change the way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of intelligence. We expect to be able to better reconcile the theories that will be developed with current work in neuroscience.
A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)
Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon
1992-01-01
Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation, informed by a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this assessment. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely.
For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
Z Alotaibi, Abdullah
2015-10-20
Vision is the ability to see with a definite understanding of features, color, and contrast, and to distinguish between objects visually. In 1999, the World Health Organization (WHO) and the International Agency for the Prevention of Blindness formulated a worldwide project for the eradication of preventable loss of sight under the title "Vision 2020: the Right to Sight". This global program aims to eradicate preventable loss of sight by the year 2020. This study was conducted to determine the main causes of low vision in Saudi Arabia and to assess patients' visual improvement after using low vision devices (LVD). The study is a retrospective study conducted in the low vision clinic at Eye World Medical Complex in Riyadh, Saudi Arabia. The medical records of 280 patients attending the low vision clinic from February 2008 to June 2010 were included. A data sheet was completed for each patient, recording age, gender, cause of low vision, unassisted visual acuity for long and short distances, and the low vision devices for long and short distances that provided the best visual acuity. The results show that the main cause of low vision was optic atrophy (28.9%). Retinitis pigmentosa was the second cause, accounting for 73 patients (26%), followed by diabetic retinopathy and macular degeneration with 44 patients (15.7%) and 16 patients (5.7%), respectively. Interfamily (consanguineous) marriage could be one of the main causes of low vision. Public awareness campaigns should be undertaken to provide education on ocular diseases resulting from consanguineous marriage. It is also important to establish low vision clinics in order to improve the situation.
NASA's Plans for Developing Life Support and Environmental Monitoring and Control Systems
NASA Technical Reports Server (NTRS)
Lawson, B. Michael; Jan, Darrell
2006-01-01
Life Support and Monitoring have recently been reworked in response to the Vision for Space Exploration. The Exploration Life Support (ELS) Project has replaced the former Advanced Life Support Element of the Human Systems Research and Technology Office. Major differences between the two efforts include: the separation of thermal systems into a new stand-alone thermal project, deferral of all work on plant biological systems, relocation of food systems to another organization, the addition of a new project called habitation systems, and an overall reduction in the number of technology options due to lower funding. The Advanced Environmental Monitoring and Control (AEMC) Element is retaining its name but changing its focus. The work planned in the ELS and AEMC projects is organized around the three major phases of the Exploration Program. The first phase is the Crew Exploration Vehicle (CEV). The ELS and AEMC projects will develop hardware for this short-duration orbital and trans-lunar vehicle. The second phase is sortie landings on the moon. Life support hardware will be developed for lunar surface access vehicles, including upgrades of the CEV equipment and technologies that could not be pursued in the first phase due to limited time and budget. Monitoring needs will address lunar dust issues, not applicable to orbital needs. The ELS and AEMC equipment is again for short-duration use but faces different environmental considerations. The third phase will be a longer-duration lunar outpost. This will consist of a new set of hardware developments better suited for long-duration life support and associated monitoring needs on the lunar surface. The presentation will show the planned activities and technologies that are expected to be developed by the ELS and AEMC projects for these program phases.
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
3-D Signal Processing in a Computer Vision System
Dongping Zhu; Richard W. Conners; Philip A. Araman
1991-01-01
This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failures. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...
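The 3-D extension can be illustrated with a minimal stand-in for the adaptive filter: a 3x3x3 box (mean) filter over a small volume, which suppresses isolated noise voxels before segmentation. The volume and values are illustrative; Unser's adaptive filter itself is more elaborate.

```python
# Hedged sketch: a 3x3x3 mean filter over a small volume, as a simple
# stand-in for the 3-D adaptive filtering described above. Volume size
# and contents are illustrative assumptions.

def box_filter_3d(vol):
    """Mean filter over the 3x3x3 neighborhood (clipped at the borders)."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                vals = [vol[zz][yy][xx]
                        for zz in range(max(0, z - 1), min(nz, z + 2))
                        for yy in range(max(0, y - 1), min(ny, y + 2))
                        for xx in range(max(0, x - 1), min(nx, x + 2))]
                out[z][y][x] = sum(vals) / len(vals)
    return out

# 4x4x4 volume with an isolated noise spike at one interior voxel.
vol = [[[0] * 4 for _ in range(4)] for _ in range(4)]
vol[1][1][1] = 27
smooth = box_filter_3d(vol)
print(smooth[1][1][1])  # 1.0  (27 spread over its 27-voxel neighborhood)
```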
Intensity measurement of automotive headlamps using a photometric vision system
NASA Astrophysics Data System (ADS)
Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.
1996-01-01
Requirements for automotive headlamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.
A New Vision for the First Amendment in Schools.
ERIC Educational Resources Information Center
Chaltain, Sam
2002-01-01
Describes the First Amendment Schools project aimed at teaching K-12 public and independent school students their constitutionally protected religious, speech, press, assembly, and petition rights and responsibilities. Includes examples describing the project in several schools. Includes annotated list of resources for educators. (PKP)
10 CFR 603.1010 - Substantive issues.
Code of Federal Regulations, 2011 CFR
2011-01-01
.... The scope is an overall vision statement for the project, including a discussion of the project's... minimum required Federal Government rights in intellectual property generated under the award and address... disposition of tangible property. The property provisions for for-profit and nonprofit participants must be in...
Hyper-Spectral Networking Concept of Operations and Future Air Traffic Management Simulations
NASA Technical Reports Server (NTRS)
Davis, Paul; Boisvert, Benjamin
2017-01-01
The NASA sponsored Hyper-Spectral Communications and Networking for Air Traffic Management (ATM) (HSCNA) project is conducting research to improve the operational efficiency of the future National Airspace System (NAS) through diverse and secure multi-band, multi-mode, and millimeter-wave (mmWave) wireless links. Worldwide growth of air transportation and the coming of unmanned aircraft systems (UAS) will increase air traffic density and complexity. Safe coordination of aircraft will require more capable technologies for communications, navigation, and surveillance (CNS). The HSCNA project will provide a foundation for technology and operational concepts to accommodate a significantly greater number of networked aircraft. This paper describes two of the HSCNA project's technical challenges. The first technical challenge is to develop a multi-band networking concept of operations (ConOps) for use in multiple phases of flight and all communication link types. This ConOps will integrate the advanced technologies explored by the HSCNA project and future operational concepts into a harmonized vision of future NAS communications and networking. The second technical challenge discussed is to conduct simulations of future ATM operations using multi-band/multi-mode networking and technologies. Large-scale simulations will assess the impact, compared to today's system, of the new and integrated networks and technologies under future air traffic demand.
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
Stereo vision technology, using two or more cameras, can recover 3D information within the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle to judge the pavement conditions within the field of view, and to measure the obstacles on the road. In this paper, stereo vision techniques for obstacle avoidance measurement on the autonomous vehicle are studied, and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is illustrated with measured data. Experiments show that the 3D structure within the field of view can be reconstructed effectively by stereo vision, providing the basis for pavement condition judgment. Compared with the radar used in unmanned vehicle navigation and measurement, the stereo vision system has advantages such as low cost and measurement range, and it has good application prospects.
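The core stereo measurement can be sketched in a few lines: match a small block along a scanline to estimate disparity d, then triangulate depth as Z = f * B / d. The focal length, baseline, and synthetic scanlines below are assumptions for illustration.

```python
# Hedged sketch of stereo depth: block matching along one scanline (sum of
# absolute differences) to find disparity, then Z = f * B / d. Focal
# length, baseline, and the scanlines are illustrative assumptions.

def best_disparity(left, right, x, block=3, max_d=5):
    """Disparity minimizing SAD around column x of the left scanline."""
    half = block // 2
    ref = left[x - half:x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if x - d - half < 0:
            break
        cand = right[x - d - half:x - d + half + 1]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth(d, focal_px=700.0, baseline_m=0.12):
    """Triangulated depth in metres from disparity in pixels."""
    return focal_px * baseline_m / d

left  = [0, 0, 0, 0, 0, 0, 0, 0, 9, 5, 9, 0, 0, 0]
right = [0, 0, 0, 0, 0, 9, 5, 9, 0, 0, 0, 0, 0, 0]
d = best_disparity(left, right, 9)
print(d)                   # 3 pixels
print(round(depth(d), 2))  # 28.0 metres
```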
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan
2017-06-01
The Chang'e-3 was the first lunar soft landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 successfully landed in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover completed its movement, imaging, and geological survey tasks after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism, and the inertial measurement unit (IMU). The Navcam system was composed of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region, and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in a laboratory on the earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking, and the landing. Therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self-calibrated with the proposed method under the unknown lunar environment, and all parameters can be estimated simultaneously. The experiment was conducted in the ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method, and the weighted least-squares method. The analysis showed that the accuracy of the proposed method was superior to those of the other methods.
Finally, the proposed method was applied in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
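The self-calibration principle can be illustrated with a drastically simplified sketch: with known 3-D control points and their image observations, estimate a single camera parameter (here, focal length) by minimizing total reprojection error, the same least-squares idea the bundle block adjustment applies to all parameters jointly. The points and the "true" focal length are made-up values, not mission data.

```python
# Hedged, drastically simplified sketch of reprojection-error minimization:
# closed-form least squares for one parameter (focal length) given known
# 3-D points and their pinhole projections. All values are illustrative.

def project(point, f):
    """Pinhole projection of a camera-frame 3-D point with focal length f."""
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

def estimate_focal(points, observations):
    """f minimizing the sum of squared reprojection residuals."""
    num = den = 0.0
    for (X, Y, Z), (u, v) in zip(points, observations):
        num += u * (X / Z) + v * (Y / Z)
        den += (X / Z) ** 2 + (Y / Z) ** 2
    return num / den

true_f = 820.0
points = [(1.0, 0.5, 4.0), (-0.8, 0.2, 3.0), (0.3, -0.9, 5.0)]
obs = [project(p, true_f) for p in points]
print(round(estimate_focal(points, obs), 1))  # 820.0
```

A real bundle block adjustment solves a large sparse nonlinear least-squares problem over all camera, joint, and point parameters at once; this sketch only shows the residual-minimization idea for a single unknown.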
X-Eye: a novel wearable vision system
NASA Astrophysics Data System (ADS)
Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye
2011-03-01
This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for photo capture and management. The wearable vision system is implemented on embedded systems and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small volume but can project a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by a color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a LUT (look-up table) technique. Fingertips are then extracted, and geometrical features of the fingertip shape are matched to recognize the user's gesture commands. To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flickering. The processing speed of the whole system, including gesture recognition, is 22.9 FPS, and the experiments give a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
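The LUT speed-up for Gaussian color classification can be sketched as follows: evaluate a (here single-component) Gaussian skin-color model once per quantized color bin and cache the decision, so per-pixel classification becomes a table lookup. The Gaussian parameters, quantization step, and threshold are illustrative assumptions, not the paper's trained GMM.

```python
# Hedged sketch: precompute skin/non-skin decisions for quantized RGB bins
# from a single-component Gaussian color model, then classify pixels by
# table lookup. MEAN, VAR, STEP, and the threshold are assumptions.
import math

MEAN = (180.0, 120.0, 100.0)   # assumed skin-tone mean (R, G, B)
VAR = (900.0, 900.0, 900.0)    # assumed per-channel variances
STEP = 32                      # quantize each channel into 8 bins

def likelihood(rgb):
    """Independent-channel Gaussian likelihood of an RGB color."""
    p = 1.0
    for v, m, s2 in zip(rgb, MEAN, VAR):
        p *= math.exp(-(v - m) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    return p

def build_lut(threshold=1e-7):
    """Cache the skin decision for the centre of every quantized color bin."""
    lut = {}
    for r in range(0, 256, STEP):
        for g in range(0, 256, STEP):
            for b in range(0, 256, STEP):
                centre = (r + STEP // 2, g + STEP // 2, b + STEP // 2)
                lut[(r // STEP, g // STEP, b // STEP)] = \
                    likelihood(centre) > threshold
    return lut

LUT = build_lut()

def is_skin(rgb):
    """Per-pixel classification reduced to a dictionary lookup."""
    r, g, b = rgb
    return LUT[(r // STEP, g // STEP, b // STEP)]

print(is_skin((185, 125, 105)))  # True: near the assumed skin mean
print(is_skin((10, 240, 10)))    # False: saturated green
```

A multi-component GMM fits this scheme unchanged: only `likelihood` becomes a weighted sum of Gaussians, while the LUT and lookup stay the same, which is where the speed-up comes from.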
An embedded vision system for an unmanned four-rotor helicopter
NASA Astrophysics Data System (ADS)
Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James
2006-10-01
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has given very good results on a number of real-time robotic vision algorithms.
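One common way such a control module can fuse a slow but drift-free tilt-sensor angle with a fast gyro rate is a complementary filter; a sketch follows. The gains, sample period, and sensor values are illustrative assumptions, not the SAIL board's actual algorithm.

```python
# Hedged sketch of attitude fusion: a complementary filter blends the
# integrated gyro rate (fast, drifting) with the tilt-sensor angle (slow,
# drift-free). Gains, dt, and sensor values are illustrative assumptions.

def complementary_filter(angle, gyro_rate, tilt_angle, dt=0.01, alpha=0.98):
    """One filter update: alpha weights the gyro path vs. the tilt sensor."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * tilt_angle

# Hovering: tilt sensor reads 0 deg, gyro carries a +1 deg/s bias.
angle = 5.0  # deliberately bad initial estimate
for _ in range(2000):
    angle = complementary_filter(angle, gyro_rate=1.0, tilt_angle=0.0)
print(round(angle, 2))  # 0.49
```

With alpha near 1 the gyro dominates the short-term response while the tilt sensor bounds long-term drift; the small steady-state offset (0.49 degrees here) reflects the assumed constant gyro bias.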
The eye as metronome of the body.
Lubkin, Virginia; Beizai, Pouneh; Sadun, Alfredo A
2002-01-01
Vision is much more than just resolving small objects. In fact, the eye sends visual information to the brain that is not consciously perceived. One such pathway entails visual information to the hypothalamus. The retinohypothalamic tract (RHT) mediates light entrainment of circadian rhythms. Retinofugal fibers project to several nuclei of the hypothalamus. These and further projections to the pineal via the sympathetic system provide the anatomical substrate for the neuro-endocrine control of diurnal and longer rhythms. Without the influence of light and dark, many rhythms desynchronize and exhibit free-running periods of approximately 24.2-24.9 hours in humans. This review will demonstrate the mechanism by which the RHT synchronizes circadian rhythms and the importance of preserving light perception in those persons with impending visual loss.
An Introduction to Flight Software Development: FSW Today, FSW 2010
NASA Technical Reports Server (NTRS)
Gouvela, John
2004-01-01
Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and from new development projects, including the Cockpit Avionics Upgrade, are applied to the projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high-quality software. It proposes what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity required for future space exploration. The technologies in use today within FSW development include tools for requirements tracking, integrated change management, and modeling and simulation. Specific challenges that have been met include the introduction and integration of a Commercial Off-the-Shelf (COTS) Real-Time Operating System for critical functions. Though technology prediction has proved imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self-healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object-oriented UML design, iterative development using independent components, and rapid prototyping. In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of workforce processes.
Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture, with all processes guided by automated office assistants. The infrastructure in use today includes strict software development and configuration management procedures, including strong control of resource management and critical-skills coverage. This will evolve to a fully integrated staff organization with efficient and effective communication throughout all levels, guided by a Mission-Systems Architecture framework with a focus on risk management and attention to inevitable product obsolescence. This infrastructure of computing equipment, software, and processes will itself be subject to technological change and the need for managed change and improvement.
New Directions for NASA's Advanced Life Support Program
NASA Technical Reports Server (NTRS)
Barta, Daniel J.
2006-01-01
Advanced Life Support (ALS), an element of Human Systems Research and Technology's (HSRT) Life Support and Habitation Program (LSH), has been NASA's primary sponsor of life support research and technology development for the agency. Over its history, ALS sponsored tasks across a diverse set of institutions, including field centers, colleges and universities, industry, and governmental laboratories, resulting in numerous publications and scientific articles, patents and new technologies, as well as education and training for primary, secondary, and graduate students, including those at minority-serving institutions. Prior to the Vision for Space Exploration (VSE), announced on January 14, 2004 by the President, ALS had been focused on research and technology development for long-duration exploration missions, emphasizing closed-loop regenerative systems, both biological and physicochemical. Taking a robust and flexible approach, ALS focused on capabilities to enable visits to multiple potential destinations beyond low Earth orbit. ALS developed requirements, reference missions, and assumptions upon which to structure and focus its development program. The VSE gave NASA a plan for steady human and robotic space exploration based on specific, achievable goals. Recently, the Exploration Systems Architecture Study (ESAS) was chartered by NASA's Administrator to determine the best exploration architecture and strategy to implement the Vision. The study identified key technologies required to enable and significantly enhance the reference exploration missions and prioritized near-term and far-term technology investments. This technology assessment resulted in a revised Exploration Systems Mission Directorate (ESMD) technology investment plan. A set of new technology development projects was initiated as part of the plan's implementation, replacing tasks previously initiated under HSRT and its sister program, Exploration Systems Research and Technology (ESRT).
The Exploration Life Support (ELS) Project, under the Exploration Technology Development Program, has recently been initiated to perform directed life support technology development in support of Constellation and the Crew Exploration Vehicle (CEV). ELS has replaced ALS, with several major differences. Thermal control systems have been separated into a new stand-alone project (Thermal Systems for Exploration Missions). Tasks in Advanced Food Technology have been relocated to the Human Research Program. Tasks in a new discipline area, Habitation Engineering, have been added. Research and technology development for capabilities required for longer-duration stays on the Moon and Mars, including bioregenerative systems, have been deferred.
NASA Technical Reports Server (NTRS)
Sanders, Gerald B.; Larson, William E.
2012-01-01
Incorporation of In-Situ Resource Utilization (ISRU) and the production of mission-critical consumables for propulsion, power, and life support into mission architectures can greatly reduce the mass, cost, and risk of missions, leading to a sustainable and affordable approach to human exploration beyond Earth. ISRU and its products can also greatly affect how other exploration systems are developed, including determining which technologies are important or enabling. While the concept of lunar ISRU has existed for over 40 years, the technologies and systems had not progressed much past simple laboratory proof-of-concept tests. With the release of the Vision for Space Exploration in 2004, with its goal of harnessing the Moon's resources, NASA initiated the ISRU Project in the Exploration Technology Development Program (ETDP) to develop the technologies and systems needed to meet this goal. In the five years of work in the ISRU Project, significant advancements and accomplishments occurred in several important areas of lunar ISRU. Also, two analog field tests held in Hawaii in 2008 and 2010 demonstrated all the required steps in ISRU capabilities, along with the integration of ISRU products and hardware with propulsion, power, and cryogenic storage systems. This paper will review the scope of the ISRU Project in the ETDP, ISRU incorporation and development strategies utilized by the ISRU Project, and ISRU development and test accomplishments over the five years of funded project activity.
Real-time adaptive off-road vehicle navigation and terrain classification
NASA Astrophysics Data System (ADS)
Muller, Urs A.; Jackel, Lawrence D.; LeCun, Yann; Flepp, Beat
2013-05-01
We are developing a complete, self-contained autonomous navigation system for mobile robots that learns quickly, uses commodity components, and has the added benefit of emitting no radiation signature. It builds on the autonomous navigation technology developed by Net-Scale and New York University during the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program and takes advantage of recent scientific advancements achieved during the DARPA Deep Learning program. In this paper we will present our approach and algorithms, show results from our vision system, discuss lessons learned from the past, and present our plans for further advancing vehicle autonomy.
Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft
2017-06-01
International Journal of Computer Science and Network Security 7, no. 3: 112–117. Accessed April 7, 2017. http://www.sciencedirect.com/science/article/pii... ...the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system. This thesis examines the ...integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capability for the autonomous vehicle; the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems in the lab and on an outdoor test track has shown that at five mph the vehicle can follow a line while avoiding obstacles.
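The step from tracked blob coordinates to a steering command can be sketched as a simple proportional controller. This is an illustrative assumption, not the Bearcat's actual control law; the function name, gain, and clamping range are invented for the example.

```python
def steering_from_blobs(blob_xs, image_width, gain=0.005):
    """Proportional steering from lane-marker blob centroids.

    blob_xs: x-coordinates (pixels) of blobs detected along the lane marker.
    Returns a steering command clamped to [-1, 1]; positive steers right.
    """
    if not blob_xs:
        return 0.0  # no marker in view: hold the current course
    center = sum(blob_xs) / len(blob_xs)       # mean blob position
    error = center - image_width / 2           # pixels off image center
    return max(-1.0, min(1.0, gain * error))   # clamp to actuator range
```

A blob centered in a 640-pixel image yields zero correction; a blob to the right of center yields a positive (rightward) command.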
Reconfigurable vision system for real-time applications
NASA Astrophysics Data System (ADS)
Torres-Huitzil, Cesar; Arias-Estrada, Miguel
2002-03-01
Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to provide an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, together with a mechanism for building systems with this architecture. On the software side, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
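As a software analogue of the window-based operations such modules implement, the following is a minimal sketch of a generic 3x3 window operator. The hardware would pipeline this sliding-window loop, but the arithmetic is the same; the function name and the Sobel example are illustrative, not taken from the paper.

```python
def window_op(image, kernel):
    """Apply a 3x3 window operation to a 2D image (lists of lists).
    Border pixels are skipped, so the output is (H-2) x (W-2)."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):          # accumulate over the 3x3 window
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y - 1 + ky][x - 1 + kx]
            row.append(acc)
        out.append(row)
    return out

# Sobel-x response on a small image containing a vertical step edge
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
step = [[0, 0, 10, 10]] * 4  # 4x4 image, edge between columns 1 and 2
edges = window_op(step, sobel_x)
```

Each output pixel needs nine multiply-accumulates, which is exactly the per-pixel load the FPGA architecture is built to absorb.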
Science and applications on the space station: A strategic vision
NASA Technical Reports Server (NTRS)
1988-01-01
The central themes relating to science and applications on the Space Station for fiscal year 1989 are discussed. Materials science research is proposed in a wide variety of subfields including protein crystal growth, metallurgy, and properties of fluids. Also proposed are the U.S. Polar Platform, an Extended Duration Crew Operations Project, and a long-range Space Biology Research Project to investigate plant and animal physiology, gravitational biology, life support systems, and exobiology. The exterior of the Space Station will provide attachment points for payloads to study subjects such as the earth and its environment, the sun, other bodies in the solar system, and cosmic objects. Examples of such attached payloads are given. They include a plasma interaction monitoring system, observation of solar features and properties, studies of particle radiation from the sun, cosmic dust collection and analysis, surveys of various cosmic and solar rays, measurements of rainfall and wind and the study of global changes on earth.
2013-11-01
Acoustic Measurement and Model Predictions for the Aural Nondetectability of Two Night-Vision Goggles, by Jeremy Gaston, Tim Mermagen, and Kelly Dickerson, Human Research and Engineering Directorate, ARL (Project Number 74A).
Joint Vision for the Korean Peninsula -- Can We Get There?
2012-03-11
...a complex problem that requires a multifaceted approach. Trilateral cooperation with China, coupled with all the elements of the Alliance's national power, can set the conditions for the Joint Vision Statement to become a reality in this century. Key terms: Northeast Asia, China. Format: Strategy Research Project. Date: 11 March 2012.
A Vision Too Far? Mapping the Space for a High Skills Project in the UK
ERIC Educational Resources Information Center
Lloyd, Caroline; Payne, Jonathan
2005-01-01
Although the current Labour government is committed to developing the UK as a high skills society, there is much confusion as to what such a society might look like and from where it might draw its inspiration. Some academic commentators have also expressed the need for a clearer vision of the kind of society toward which the UK might choose to head…
Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar
2004-07-01
In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast, dynamic game that demands an efficient and robust vision system. The vision system is also generally applicable to other robot applications, such as mobile transport robots in production and warehouses, attendant robots, fast vision tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm finds contiguous regions belonging to each class. In the second step, all the regions are examined, and those belonging to an observed object are selected by means of simple logic procedures. The novelty lies in optimizing the processing time needed to estimate candidate object positions. Better results are achieved by implementing camera calibration and a shading correction algorithm: the former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
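The two-step operation (per-pixel color classification, then grouping of same-class pixels into regions) can be sketched as follows. This is a minimal assumed implementation: the nearest-reference-color classifier and 4-connected flood fill are common choices, not necessarily the authors'.

```python
def classify(pixel, classes):
    """Step 1: assign an (r, g, b) pixel to the nearest reference color class."""
    return min(classes,
               key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, classes[c])))

def segment(label_image, target):
    """Step 2: collect 4-connected regions of pixels labelled `target`."""
    h, w = len(label_image), len(label_image[0])
    seen, regions = set(), []
    for y in range(h):
        for x in range(w):
            if label_image[y][x] == target and (y, x) not in seen:
                stack, region = [(y, x)], []
                seen.add((y, x))
                while stack:                      # iterative flood fill
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and label_image[ny][nx] == target
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions.append(region)
    return regions
```

Region centroids from step 2 would then feed the logic procedures that decide which regions belong to a robot marker or the ball.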
Latency in Visionic Systems: Test Methods and Requirements
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.
2005-01-01
A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and to provide enhancement over actual visual conditions where eye-limiting resolution may be a constraint. Empirical evidence has shown that total system delays, or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour.
Liu, Bao-Hua; Huberman, Andrew D; Scanziani, Massimo
2016-10-20
The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei, cortical lesions have suggested that the visual cortex might also be involved. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function, to plastically adapt the execution of innate motor behaviours.
Function-based design process for an intelligent ground vehicle vision system
NASA Astrophysics Data System (ADS)
Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.
2010-10-01
An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
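The ray-casting path-identification step described above can be sketched over an occupancy grid: rays marched outward from the robot measure the clear distance along each candidate heading, and the longest clear ray suggests a path. This is a generic illustration, not the competition robot's code; the grid layout, step size, and function names are assumptions.

```python
import math

def cast_ray(grid, x, y, angle, max_range, step=0.1):
    """March from (x, y) along `angle` until an occupied cell, the grid edge,
    or max_range. grid[row][col]: 1 = obstacle, 0 = free. Returns distance."""
    d = 0.0
    while d < max_range:
        px, py = x + d * math.cos(angle), y + d * math.sin(angle)
        col, row = int(px), int(py)
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            break  # left the mapped area
        if grid[row][col]:
            break  # hit an obstacle cell
        d += step
    return d

def best_heading(grid, x, y, headings, max_range=10.0):
    """Pick the candidate heading with the longest clear ray."""
    return max(headings, key=lambda a: cast_ray(grid, x, y, a, max_range))
```

In the actual system this thread would run over fused camera and laser range finder data and hand the chosen heading to the motion-control thread.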
3D shape measurement with thermal pattern projection
NASA Astrophysics Data System (ADS)
Brahm, Anika; Reetz, Edgar; Schindwolf, Simon; Correns, Martin; Kühmstedt, Peter; Notni, Gunther
2016-12-01
Structured light projection techniques are well-established optical methods for contactless and nondestructive three-dimensional (3D) measurements. Most systems operate in the visible wavelength range (VIS) due to commercially available projection and detection technology. For example, 3D reconstruction can be done with a stereo-vision setup by finding corresponding pixels in both cameras, followed by triangulation. Problems occur if the properties of object materials disturb the measurements, which are based on diffuse light reflection. For example, some materials are too transparent, translucent, highly absorbent, or reflective in the VIS range and cannot be recorded properly. To overcome these challenges, we present an alternative thermal approach that operates in the infrared (IR) region of the electromagnetic spectrum. For this purpose, we used two cooled mid-wave infrared (MWIR) cameras (3-5 μm) to detect emitted heat patterns introduced by a CO2 laser. We present a thermal 3D system based on a GOBO (GOes Before Optics) wheel projection unit and first 3D analyses for different system parameters and samples. We also show a second alternative approach based on an incoherent (heat) source, which overcomes typical disadvantages of high-power laser-based systems, such as industrial health and safety considerations as well as high investment costs. Thus, materials like glass or fiber-reinforced composites can be measured contactless and without the need for additional paint or coatings.
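For a rectified, parallel stereo pair, the correspondence-and-triangulation step mentioned above reduces to depth from disparity, Z = f·B/d. The sketch below illustrates only this simplified geometry (the thermal system's actual calibration is more involved), and all numbers are invented for the example.

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from a rectified parallel stereo pair: Z = f * B / disparity.

    x_left, x_right: matched pixel x-coordinates in the two cameras.
    focal_px: focal length in pixels; baseline_m: camera separation in meters.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("invalid correspondence: non-positive disparity")
    return focal_px * baseline_m / disparity

# 1000 px focal length, 0.2 m baseline, 25 px disparity -> 8 m depth
depth = triangulate_depth(x_left=412.0, x_right=387.0,
                          focal_px=1000.0, baseline_m=0.2)
```

The inverse relation between depth and disparity is why depth resolution degrades for distant surfaces, a consideration when sizing the baseline of the two MWIR cameras.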
NASA Fixed Wing Project: Green Technologies for Future Aircraft Generation
NASA Technical Reports Server (NTRS)
Del Rosario, Ruben; Koudelka, John M.; Wahls, Rich; Madavan, Nateri
2014-01-01
Commercial aviation relies almost entirely on subsonic fixed wing aircraft to constantly move people and goods from one place to another across the globe. While air travel is an effective means of transportation providing an unmatched combination of speed and range, future subsonic aircraft must improve substantially to meet efficiency and environmental targets. The NASA Fundamental Aeronautics Fixed Wing (FW) Project addresses the comprehensive challenge of enabling revolutionary energy efficiency improvements in subsonic transport aircraft combined with dramatic reductions in harmful emissions and perceived noise to facilitate sustained growth of the air transportation system. Advanced technologies and the development of unconventional aircraft systems offer the potential to achieve these improvements. Multidisciplinary advances are required in aerodynamic efficiency to reduce drag, structural efficiency to reduce aircraft empty weight, and propulsive and thermal efficiency to reduce thrust-specific energy consumption (TSEC) for overall system benefit. Additionally, advances are required to reduce perceived noise without adversely affecting drag, weight, or TSEC, and to reduce harmful emissions without adversely affecting energy efficiency or noise. The paper will highlight the Fixed Wing project vision of revolutionary systems and technologies needed to achieve these challenging goals. Specifically, the primary focus of the FW Project is on the N+3 generation; that is, vehicles that are three generations beyond the current state of the art, requiring mature technology solutions in the 2025-30 timeframe.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa
2004-01-01
The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.
Computer Vision System For Locating And Identifying Defects In Hardwood Lumber
NASA Astrophysics Data System (ADS)
Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.
1989-03-01
This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First there is the innate variability of the wood material itself. No two species look exactly the same; in fact, appearance can differ significantly among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Secondly, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper will describe the vision system that has been developed. It will assess the current system capabilities, and it will discuss directions for future research. It will be argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.
NASA Technical Reports Server (NTRS)
Murray, N. D.
1985-01-01
Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented in an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams, and outlines.
Photo screening around the world: Lions Club International Foundation experience.
Donahue, Sean P; Lorenz, Sylvia; Johnson, Tammy
2008-01-01
To describe the use of photoscreening for preschool vision screening in several diverse locations throughout the world. The MTI photo screener was used to screen pre-verbal children; photographs were interpreted using standard criteria. The Tennessee vision screening program remains successful, screening over 200,000 children during the past 8 years. Similar programs modeled across the United States have screened an additional 500,000 children. A pilot demonstration project in Hong Kong, Beijing, and Brazil screened over 5000 additional children with good success and appropriately low referral rates. Photoscreening can be an appropriate technique for widespread vision screening of preschool children throughout the world.
Audible vision for the blind and visually impaired in indoor open spaces.
Yu, Xunyi; Ganz, Aura
2012-01-01
In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life, ever-changing environments crowded with people.
Relating Standardized Visual Perception Measures to Simulator Visual System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Sweet, Barbara T.
2013-01-01
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
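One concrete bridge between clinical acuity and display specifications is the angular size of a display pixel. The sketch below uses the common rule of thumb that 20/20 vision corresponds to resolving about 1 arcminute; the linear small-angle approximation and the function names are simplifying assumptions, not a statement of the paper's method.

```python
def arcmin_per_pixel(fov_deg, pixels):
    """Angular subtense of one display pixel in arcminutes
    (small-angle approximation across the field of view)."""
    return fov_deg * 60.0 / pixels

def snellen_equivalent(fov_deg, pixels):
    """Rough Snellen denominator the display can support: a 20/20 observer
    resolves ~1 arcmin, so a pixel subtending k arcmin limits acuity
    to roughly 20/(20*k)."""
    return 20.0 * arcmin_per_pixel(fov_deg, pixels)

# A 40-degree-wide channel rendered at 1200 px gives 2 arcmin per pixel,
# limiting the pilot's effective acuity to roughly 20/40.
```

By this estimate, matching 20/20 acuity over that same 40-degree channel would require on the order of 2400 pixels, illustrating why resolution requirements scale directly with field-of-view.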
Recommendations for the Implementation of the LASSO Workflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I; Vogelmann, Andrew M; Cheng, Xiaoping
The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM's extensive suite of measurements. This modeling capability, envisioned in the ARM Decadal Vision (U.S. Department of Energy 2014), subsequently has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project, and it has an initial focus of shallow convection at the ARM Southern Great Plains (SGP) atmospheric observatory. This report documents the recommendations resulting from the pilot project to be considered by ARM for implementation into routine operations. During the pilot phase, LASSO has evolved from the initial vision outlined in the pilot project white paper (Gustafson and Vogelmann 2015) to what is recommended in this report. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso. Feedback regarding LASSO and the recommendations in this report can be directed to William Gustafson, the project principal investigator (PI), and Andrew Vogelmann, the co-principal investigator (Co-PI), via lasso@arm.gov.
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. Work currently in progress on the use of computer vision to allow robust fine motion manipulation in a poorly structured world is described, along with preliminary results and the problems encountered.
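The stereo step mentioned above, recovering a 3-D point from a pair of 2-D images before any mapping into configuration space, reduces in the simplest rectified, parallel-axis case to depth from disparity, Z = fB/d. A minimal sketch under that idealized geometry (function name and values are illustrative assumptions):

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point from a rectified, parallel-axis stereo pair.

    disparity d = x_left - x_right (pixels); depth Z = f * B / d,
    where f is the focal length in pixels and B the camera baseline.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        # Zero disparity means the point is at infinity; negative means
        # the left/right correspondence is mismatched.
        raise ValueError("point at infinity or mismatched correspondence")
    return focal_px * baseline_m / disparity

# f = 800 px, baseline 0.1 m, disparity 20 px -> target 4 m away.
print(stereo_depth(800.0, 0.1, 420.0, 400.0))
```

The calibration difficulty the abstract describes lives in what comes next: the recovered (X, Y, Z) is in the camera frame, and mapping it into joint angles still requires the vision and robot calibrations to share one frame to high precision.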
Three-dimensional vision enhances task performance independently of the surgical method.
Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A
2012-10-01
Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25% to 30% longer to complete and more complex tasks took 75% longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valin, Hugo; Sands, Ronald; van der Mensbrugghe, Dominique
Understanding the capacity of agricultural systems to feed the world population under climate change requires a good prospective vision of the future development of food demand. This paper reviews modeling approaches from ten global economic models participating in the AgMIP project, in particular the demand function chosen and the set of parameters used. We compare food demand projections at the horizon 2050 for various regions and agricultural products under harmonized scenarios. Depending on the model, we find for a business-as-usual scenario (SSP2) an increase in food demand of 59-98% by 2050, slightly higher than the FAO projection (54%). The prospects for animal calories are particularly uncertain, with a range of 61-144%, whereas FAO anticipates an increase of 76%. The projections prove more sensitive to socio-economic assumptions than to climate change conditions or bioenergy development. When considering a higher-population, lower-economic-growth world (SSP3), consumption per capita drops by 9% for crops and 18% for livestock. Various assumptions on climate change in this exercise do not lead to world calorie losses greater than 6%. Divergences across models are nonetheless notable, due to differences in the demand system, income elasticity specification, and response to price change in the baseline.
NASA Technical Reports Server (NTRS)
Smith, Fred; Perry, Jay; Nalette, Tim; Papale, William
2006-01-01
Under a NASA-sponsored technology development project, a multi-disciplinary team consisting of industry, academia, and government organizations led by Hamilton Sundstrand is developing an amine-based humidity and CO2 removal process and prototype equipment for Vision for Space Exploration (VSE) applications. Originally this project sought to research enhanced amine formulations and incorporate a trace contaminant control capability into the sorbent. In October 2005, NASA re-directed the project team to accelerate the delivery of hardware by approximately one year and emphasize deployment on board the Crew Exploration Vehicle (CEV) as the near-term developmental goal. Preliminary performance requirements were defined based on nominal and off-nominal conditions and the design effort was initiated using the baseline amine sorbent, SA9T. As part of the original project effort, basic sorbent development was continued with the University of Connecticut and dynamic equilibrium trace contaminant adsorption characteristics were evaluated by NASA. This paper summarizes the University sorbent research effort, the basic trace contaminant loading characteristics of the SA9T sorbent, design support testing, and the status of the full-scale system hardware design and manufacturing effort.
External Vision Systems (XVS) Proof-of-Concept Flight Test Evaluation
NASA Technical Reports Server (NTRS)
Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Arthur, Jarvis J.; Prinzel, Lawrence, III; Bailey, Randall E.
2014-01-01
NASA's Fundamental Aeronautics Program, High Speed Project is performing research, development, test and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today's aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two different sized participating traffic aircraft. This test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley's UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight - one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision; post-run subjective acceptability data were also collected. This paper discusses the flight test activities, its operational challenges, and summarizes the findings to date.
The precision measurement and assembly for miniature parts based on double machine vision systems
NASA Astrophysics Data System (ADS)
Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.
2015-02-01
In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrating a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed integrating two machine vision systems. In this system, a horizontal vision system measures the position of feature structures in the parts' side view, which cannot be seen by the vertical one. The position measured by the horizontal camera is converted to the vertical vision system using the calibration information. With careful calibration, the parts' alignment and positioning in the assembly process can be guaranteed. The developed assembly equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced. The calibration procedure is given, and the assembly error is analyzed for compensation.
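The conversion step described above, carrying a position measured by the horizontal camera into the vertical vision system's frame, is typically a rigid-body transform obtained from calibration. A minimal sketch (the transform values and frame names are hypothetical placeholders, not the paper's calibration data):

```python
def apply_transform(T, p):
    """Apply a 4x4 homogeneous transform T (row-major nested lists)
    to a 3D point p, returning the point expressed in the target frame."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Hypothetical calibration result: the horizontal camera frame is rotated
# 90 degrees about x relative to the vertical camera frame and offset
# 50 mm along z. Real values come from the calibration procedure.
T_h_to_v = [
    [1.0, 0.0,  0.0,  0.0],
    [0.0, 0.0, -1.0,  0.0],
    [0.0, 1.0,  0.0, 50.0],
    [0.0, 0.0,  0.0,  1.0],
]

# A side-view feature at (10, 20, 5) mm in the horizontal camera frame
# expressed in the vertical system's coordinates:
print(apply_transform(T_h_to_v, (10.0, 20.0, 5.0)))
```

The quality of this calibrated transform bounds the achievable alignment accuracy, which is why the paper analyzes the assembly error for compensation.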
PILOT PROJECT CLOSE UP: ORD RESEARCH INVENTORY
Harvey, Jim and Elin Ulrich. 2004. Pilot Project Close Up: ORD Research Inventory. Changing Times. Pp. 1. (ERL,GB R1022).
At the January 2003 summit, many people were drawn to our vision of improving ORD's internal communications by creating a "go-to" page that consolicat...
The Urban Mission: Linking Fresno State and the Community
ERIC Educational Resources Information Center
Culver-Dockins, Natalie; McCarthy, Mary Ann; Brogan, Amy; Karsevar, Kent; Tatsumura, Janell; Whyte, Jenny; Woods, R. Sandie
2011-01-01
The "four spheres" model of transformation, as viewed through the lens of the urban mission of California State University, Fresno, is examined through current projects in economic development, infrastructure development, human development, and the fourth sphere, which encompasses the broad vision. Local projects will be highlighted.
Anderson, Jahue
2011-01-01
This is the story of failure: in this case, an irrigation project that never met its boosters' expectations. Between 1880 and 1930, Wichita Falls entrepreneur Joseph Kemp dreamed of an agrarian Eden on the Texas rolling plains. Kemp promoted reclamation and conservation and envisioned the Big Wichita River Valley as the "Irrigated Valley." But the process of bringing dams and irrigation ditches to the Big Wichita River ignored knowledge of the river and local environment, which ultimately was key to making these complex systems work. The boosters faced serious ecological limitations and political obstacles in their efforts to conquer water, accomplishing only parts of the grandiose vision. Ultimately, salty waters and poor drainage doomed the project. While the livestock industry survived and the oil business thrived in the subsequent decades, the dream of idyllic irrigated farmsteads slowly disappeared.
Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display
NASA Astrophysics Data System (ADS)
Long, David L.
Television and cinema display are both trending towards greater ranges and saturation of reproduced colors made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LED, quantum dots and others are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite artistic benefits brought to creative content producers, spectrally selective excitations of naturally different human color response functions exacerbate variability of observer experience. An exaggerated variation in color-sensing is explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, singular standard observer summaries of human color vision, such as those found in the CIE's 1931 and 1964 color matching functions and used extensively in motion picture color management, are deficient in recognizing expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching in both uniform colors and imagery, but few have shown explicit color management with an aim of minimizing differences in observer perception. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but also that intentionally engineered multiprimary displays employing more than three primaries can offer increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display has been constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research.
This display has been further proven in forced-choice paired comparison tests to deliver superior color matching to reference stimuli versus both contemporary standard RGB cinema projection and recently ratified standard laser projection across a large population of color-normal observers.
Computer vision for foreign body detection and removal in the food industry
USDA-ARS?s Scientific Manuscript database
Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
Using Vision System Technologies for Offset Approaches in Low Visibility Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.
2015-01-01
Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appears feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen
Exploration Life Support Technology Development for Lunar Missions
NASA Technical Reports Server (NTRS)
Ewert, Michael K.; Barta, Daniel J.; McQuillan, Jeffrey
2009-01-01
Exploration Life Support (ELS) is one of NASA's Exploration Technology Development Projects. ELS plans, coordinates and implements the development of new life support technologies for human exploration missions as outlined in NASA's Vision for Space Exploration. ELS technology development currently supports three major projects of the Constellation Program - the Orion Crew Exploration Vehicle (CEV), the Altair Lunar Lander and Lunar Surface Systems. ELS content includes Air Revitalization Systems (ARS), Water Recovery Systems (WRS), Waste Management Systems (WMS), Habitation Engineering, Systems Integration, Modeling and Analysis (SIMA), and Validation and Testing. The primary goal of the ELS project is to provide different technology options to Constellation which fill gaps or provide substantial improvements over the state-of-the-art in life support systems. Since the Constellation missions are so challenging, mass, power, and volume must be reduced from Space Shuttle and Space Station technologies. Systems engineering analysis also optimizes the overall architecture by considering all interfaces with the life support system and potential for reduction or reuse of resources. For long duration missions, technologies which aid in closure of air and water loops with increased reliability are essential as well as techniques to minimize or deal with waste. The ELS project utilizes in-house efforts at five NASA centers, aerospace industry contracts, Small Business Innovative Research contracts and other means to develop advanced life support technologies. Testing, analysis and reduced gravity flight experiments are also conducted at the NASA field centers. This paper gives a current status of technologies under development by ELS and relates them to the Constellation customers who will eventually use them.
An architecture for real-time vision processing
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong
1994-01-01
To study the feasibility of developing an architecture for real-time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible to all processors. Each idle processor fetches a task and its associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the need for a centralized controller. The author concludes that real-time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
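The queue-coordinated scheme described above can be sketched in modern terms with a thread pool: idle workers pull subtasks from a shared queue and post results to shared memory, so work balances itself without a central controller. This is a hypothetical stand-in for illustration only (the original ran on an i860 array processor, and `vision_subtask` is a placeholder, not one of the paper's vision operations):

```python
import queue
import threading

def vision_subtask(tile_id):
    """Placeholder for a real vision operation on one image tile."""
    return tile_id * tile_id

def worker(tasks, results, lock):
    # Each idle worker fetches the next subtask; whichever worker is free
    # takes the next item, so load balancing is implicit in the queue.
    while True:
        tile_id = tasks.get()
        if tile_id is None:          # sentinel: no more work
            tasks.task_done()
            return
        out = vision_subtask(tile_id)
        with lock:                   # post result to shared memory
            results[tile_id] = out
        tasks.task_done()

tasks, results, lock = queue.Queue(), {}, threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for tile_id in range(8):             # a vision function split into 8 subtasks
    tasks.put(tile_id)
for _ in threads:                    # one sentinel per worker
    tasks.put(None)
tasks.join()
for t in threads:
    t.join()
print(results[7])  # 49
```

The sentinel-per-worker shutdown keeps the design controller-free: the queue itself carries both the work and the termination signal.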
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lantz, Eric J.; Mone, Christopher D.; DeMeo, Edgar
In March 2015, the U.S. Department of Energy (DOE) released Wind Vision: A New Era for Wind Power in the United States (DOE 2015), which explores a scenario in which wind provides 10 percent of U.S. electricity in 2020, 20 percent in 2030, and 35 percent in 2050. The Wind Vision report also includes a roadmap of recommended actions aimed at pursuit of the vision and its underlying wind-deployment scenario. The roadmap was compiled by the Wind Vision project team, which included representatives from the industrial, electric-power, government-laboratory, academic, environmental-stewardship, regulatory, and permitting stakeholder groups. The roadmap describes high-level activities suitable for all sectors with a stake in wind power and energy development. It is intended to be a 'living document,' and DOE expects to engage the wind community from time to time to track progress.
Creating a vision for your medical call center.
Barr, J L; Laufenberg, S; Sieckman, B L
1998-01-01
MCC technologies and applications that can have a positive impact on managed care delivery are almost limitless. As you determine your vision, be sure to have in mind the following questions: (1) Do you simply want an efficient front end for receiving calls? (2) Do you want to offer triage services? (3) Is your organization ready for a fully functional "electronic physician's office?" Understand your organization's strategy. Where are you going, not only today but five years from now? That information is essential to determine your vision. Once established, your vision will help determine what you need and whether you should build or outsource. Vendors will assist in cost/benefit analysis of their equipment, but do not lose sight of internal factors such as "prior inclination" costs in the case of a nurse triage program. The technology is available to take your vision to its outer reaches. With the projected increase in utilization of call center services, don't let your organization be left behind!
A Machine Vision Quality Control System for Industrial Acrylic Fibre Production
NASA Astrophysics Data System (ADS)
Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João
2002-12-01
This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.
CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System
Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1991-01-01
Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...
NASA Technical Reports Server (NTRS)
Prinzel, L.J.; Kramer, L.J.
2009-01-01
A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.